Force capistrano to ask for password immediately

I have a cap script that performs some lengthy compilation steps before running any remote code. I would like to be able to walk away while all this is happening, but as soon as the compilation is finished it will ask me for the remote server password. Is there a way I can force the script to ask for the password immediately, so that I can leave the rest to run unattended?
I know that I can set up passwordless ssh to avoid the password prompt entirely, but I am looking for a method that will allow unattended deploys for users that don't have passwordless ssh set up yet.
I feel like I have seen a simple solution for this somewhere, but I am having trouble finding the correct search terms.

If you're running most commands via sudo (that is, you have set :use_sudo, true) you can probably do this by hooking before "deploy", "ask_for_password" and creating an "ask_for_password" task that immediately performs some command with sudo, such as sudo date. Sudo will prompt the first time only, and presumably has a long enough timeout to get through the rest of the deploy.
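A minimal sketch of that hook (Capistrano 2 syntax; the task name and the choice of sudo date are just illustrative):

before "deploy", "ask_for_password"

desc "Trigger the sudo password prompt before the long compile starts"
task :ask_for_password do
  run "#{sudo} date"   # any harmless sudo command caches the credentials
end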
If that doesn't work...
...we're talking about capistrano -- ain't nothin' simple with capistrano. It's an incredibly powerful tool, and I don't know anyone who finds it "simple".
Instead of setting up everyone to be able to deploy, maybe set up a host that you can let people ssh into as a user like "deployer", then have that execute the deploys.
But deploying is a pretty significant task -- not everyone should be able to do it, especially to production. I think you're better off installing the passwordless public keys of users who have authority to deploy on the servers they're permitted to deploy to (e.g. more people can deploy to test than to staging or production).

Related

How to run powershell script remotely using chef?

I have a PowerShell script on the Chef server that needs to run on a remote Windows server. How can I run this PowerShell script from the Chef server on the remote Windows server?
Chef doesn't do anything like this. First, Chef Server can never remotely access servers directly; all it does is store data. Second, Chef doesn't really do "run a thing in a place right now". We offer workstation tools like knife ssh and knife winrm as simplistic wrappers, but they aren't made for anything complex. The Chef-y way to do this would be to make a recipe and run your script using the powershell_script resource.
Does that mean Chef is also running on the Windows server?
If yes, why not use psexec from the Sysinternals PsTools?
https://learn.microsoft.com/en-us/sysinternals/downloads/psexec
Here is my understanding of what you are trying to achieve. If I'm wrong then please correct me in a comment and I will update my answer.
You have a powershell script that you need to run on a specific server or set of servers.
It would be convenient to have a central management solution for running this script instead of logging into each server and running it manually.
Ergo, you either need to run this script in many places when some condition isn't met (such as a file being missing), or you need to run it often, or you need it to run with specific timing relative to other processes you have going on.
Without knowing precisely what you're trying to achieve with your script, the best solution I know of is to write a cookbook and do one of the following:
If your script is complex, place it in your cookbook's files folder (assuming the script will be identical on all computers it runs on) or in your cookbook's templates folder (if you need to inject information into it at write time). You can then write the .ps1 file to the local computer during a Chef converge with one of the snippets below. After you write it to disk you will also have to call it with one of the commands in the next bullet.
Monomorphic file:
cookbook_file '<destination>' do
  source '<filename.ps1>'
  # <other options>
end
Options can be found at https://docs.chef.io/resource_cookbook_file.html
Polymorphic file:
template '<destination>' do
  source '<template.ps1.erb>'
  variables(<hash of variables and values>)
  # <other options>
end
Options can be found at https://docs.chef.io/resource_template.html
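To make the template version concrete, here is a minimal sketch; the destination path, template name, and variable values are all hypothetical:

template 'C:\scripts\deploy.ps1' do
  source 'deploy.ps1.erb'
  variables(app_name: 'myapp', port: 8080)
end

with templates/default/deploy.ps1.erb containing something like:

Write-Host "Deploying <%= @app_name %> on port <%= @port %>"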
If your script is a simple one-liner you can instead use powershell_script, powershell_out! or execute. powershell_out! has all the same options and features as the shell_out! command, with the added advantage that your converge will pause until it receives an exit status for the command, if that is desirable. The documentation on it is a bit spottier, though, so spend some time experimenting and googling. A short powershell_script sketch follows the links below.
https://docs.chef.io/resource_powershell_script.html
https://docs.chef.io/resource_execute.html
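A minimal powershell_script sketch; the resource name and command are hypothetical:

powershell_script 'create_app_dir' do
  code 'New-Item -ItemType Directory -Path C:\myapp'
end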
Whichever option you end up going with, you will probably want to guard your resource with conditions for when it should not run: a file already exists, a registry key is set, or whatever else your script changes that you can test for (a sketch follows the link below). If you truly want the script to execute on every single converge then you can skip this step, but that is a code smell and I urge you to reconsider your plans.
https://docs.chef.io/resource_common.html#guards
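For example, guarding the resource from the previous sketch so it only runs when the directory is missing (the path is hypothetical):

powershell_script 'create_app_dir' do
  code 'New-Item -ItemType Directory -Path C:\myapp'
  not_if { ::File.directory?('C:\myapp') }
end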
It's important to note that this is not an exhaustive list of how to run a powershell script on your nodes, just a collection of common patterns I've seen.
Hope this helped.

Google Cloud Storage - GSUtil Update fails because file used by another process

We use an ETL process to pull data from Google Cloud Storage, but annoyingly it hangs every time Google releases updates to GSUtil, because it sits at a prompt asking if you want to update the library. That's fine if you are running it manually, but not cool when it's being run in an automated SSIS package, as jobs don't finish for days and you keep wasting time on the same stupid cause.
I thought I was going to be clever and add "python gsutil update -n" to the top of the bash script whose building and execution I'm automating in my SSIS package, in the hope of curbing this problem, but when I run this command from the prompt in either Windows Server 2008r2 or Windows 7 I get the following:
C:\gsutil>python gsutil update -f -n
Copying gs://pub/gsutil.tar.gz...
OSError: The process cannot access the file because it is being used by another process.
Any help?
P.S. Also, Google engineers: can you PLEASE remove these prompts for all of us using these tools in automated processes? I have other things to work on instead of constantly coming back to things like this every few days/weeks.
What version of gsutil are you running?
Also, to be clear: Are you talking about the fact that gsutil checks for available software updates periodically, and if it finds them it then prompts you whether you want to update? Or are you talking about the fact that the gsutil update command asks if you want to perform the update?
If the former, gsutil shouldn't be performing this check/prompting if you are running gsutil from a script not connected to a TTY. If that's not working correctly, we'd like to know.
And also, if that's the problem you're having, you can completely disable automated software update checks by setting software_update_check_period=0 in the [GSUtil] section of your .boto config file.
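For example, the relevant section of your .boto file would look like this (the section and setting names are as described above; the rest of the file is unchanged):

[GSUtil]
software_update_check_period = 0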

Interactive service Logged in as user

So we are trying to set up a Continuous Integration server at my company. What we need to do is svn update the working copy on the server, then build it, start the site using IIS Express, and then run Watin/SpecFlow tests on it. I'm using rake inside of CCNet to automate all of this. We are running CCNet as a service and logging in as a build agent, because svn uses our domain login credentials to authenticate; because of this, I've been unable to call "svn update --username user --password pass" from the command line. Yet Watin needs to be run in an interactive mode, and the service won't allow that. I'm able to get it to work if we manually log on to the server and run CCNet from the command line. Unfortunately the build agent also logs out of that user account, closing any command lines with it (I don't know why it needs to do this, but it does). So is it possible to run a service in interactive mode if it's signed in as a user?
If you have access to two servers (this can also work from a workstation to a server), you can build an automated remote desktop client in Windows Forms; see this post: http://www.codeproject.com/Articles/43705/Remote-Desktop-using-C-NET
Use it from one server to log into the server you need to run the Watin tests on, and in the scheduled task have the tests start after the login has happened. This gives the impression that the service is interacting with the desktop.
If you need any more information let me know

Sensible deployment using EC2

We're currently using RightScale, and every time we deploy, we execute a script on the server or server array that we want to update. It pulls the code from a GitHub repository, creates a new folder in /var/www/releases/TIMESTAMP, and symlinks the document root, /var/www/current, to that directory.
We're looking to get a better deployment strategy, such as something where we SSH into one of the servers on the private network, and run a command-line script to deploy what we want to deploy.
However, this means that this one server has to have its public key in the authorized_keys of all of the servers we want to deploy to. Is this safe? Wouldn't this make it a single server from which all the other servers could be accessed?
What's the best way to approach this?
Thanks!
We use a similar strategy to deploy, though we're not with Rightscale anymore.
I think that approach is generally fine, and I'd be interested to learn what you think is not sensible about it.
If you want to do your ssh thing, I'd go about it as follows:
Lock down ssh using security groups, e.g. open ssh only to a specific IP, or to servers in a deploy security group, or similar. The disadvantage here is that you might lock yourself out when the other servers are down, etc.
Put public keys on each instance to allow password-less logins. If you're security conscious, rotate those keys on a monthly basis or, for example, when employees leave.
Use Fabric or Capistrano to log into your servers from the deploy master over ssh and do your deployment (see the sketch below).
Again, I think RightScale's approach is not unique to them; a lot of services do it like that. The reason is that when you symlink and keep previous versions around, it's easier to roll back, and so on.
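A minimal Capistrano-style sketch of that setup, run from the deploy master; the application name, repository URL, paths, and hostnames are all hypothetical:

# config/deploy.rb (Capistrano 2 syntax)
set :application, "myapp"
set :repository,  "git@github.com:example/myapp.git"
set :deploy_to,   "/var/www"
set :deploy_via,  :remote_cache

role :web, "web1.internal", "web2.internal"

# `cap deploy` checks the code out into /var/www/releases/TIMESTAMP and
# repoints the /var/www/current symlink, so rolling back is just
# `cap deploy:rollback`.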

What are the advantages of rsh versus Perl's Expect.pm?

I have a Perl Expect.pm script that does some moderately complex stuff like packaging applications, deploying the application, checking for logs, etc. on multiple remote unix hosts.
My predecessor had written similar scripts using rsh.
Is there a better approach between the two? Or should I use something all together different?
I am guessing somebody will bring up SSH; it's basically a replacement for rsh, right? Unfortunately, however, SSH is not an option for me right now.
Another thing I should add is that after logging in I need to be able to SUDO to a particular user to do most of the actions on the remote hosts.
"Another thing I should add is that after logging in I need to be able to SUDO to a particular user to do most of the actions on the remote hosts."
To address this one particular point: using rsh (or ssh) you can specify which user to become in the remote session:
$ rsh -l username hostname
There's no need to use sudo in this case. Now would definitely be the time to look into ssh due to security issues. The syntax is the same, but ssh also allows a slightly different (and I'd say better) syntax:
$ ssh username@hostname
I found expect to be too finicky, but my experience with it is not substantial.
They do different things. Expect is a way to script what would otherwise be manual responses. rsh -- remote shell, not restricted shell, an unfortunate name clash -- allows you to run commands remotely on another system.
That said, the security holes and other disadvantages of using rsh to do remote commands, run sudo, etc, are immense.
I can see multiple ways of doing this:
Expect over telnet, rsh, or ssh
pros: single connection, fewer escaping issues
cons: fragility in a changing environment
rsh/ssh each command individually
pros: fewer escaping issues, more reliable in a changing environment
cons: each connection takes time for authentication, and, for ssh, handshaking the encryption
rsh/ssh all commands at once
pros: single connection (less overhead), more reliable than expect
cons: fragility in maintenance especially as you get more than a handful of statements in there, escaping issues are more prevalent (escape in perl so that it's still escaped by rsh/ssh so that it's still escaped by the remote shell so that it's properly handled by the sudo'd remote shell?)
rsh/ssh and run a script
pros: single connection, more reliable, more maintainable
cons: finding a way to get it over there (rcp/scp work, NFS works, you need to determine the best way for you).
All things considered, this is the most minor con, as you could simply do something like:
open my $fh, '|-', 'ssh user@host "cat > /tmp/myscript"'
    or die "could not start ssh: $!";
print $fh $script;
close $fh or die "upload failed: $!";
system 'ssh', 'user@host',
    'chmod u+x /tmp/myscript; /tmp/myscript; rm /tmp/myscript';
Of course, you'd add in more error handling (what if /tmp/myscript already exists, and so on), but that's the idea.
Given a choice between rsh and telnet via expect, I would choose rsh. Expect scripts are fragile; all it takes to break one is someone changing the value of PS1 on the remote machine. Using rsh also prepares you for the day you finally enter the '90s and start using ssh (since you can mostly just change rcp to scp and rsh to ssh and everything still works).