How to switch remote user in Capistrano

In Capistrano, I can set the :user variable to determine which user is used for the SSH login when executing remote commands. But I'd like to execute commands as different users depending on the task. Is it possible? Something like run "command", :as => "bob" would be nice.

The docs: https://github.com/capistrano/capistrano/wiki/2.x-DSL-Action-Invocation-Run
You could use a combination of :shell and a &block:
run "echo am i bob ? :$USER:", :shell => "su - bob -s bash" do |channel, stream, data|
channel.send_data("#{bob_password}\n")
end
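For reference, the :shell option makes Capistrano wrap the command roughly as shell -c 'command' on the remote side, so the above amounts to something like the line below; su then prompts for bob's password, which the block answers via channel.send_data. This is a sketch of the effective remote invocation, assuming Capistrano 2's default command wrapping, not literal Capistrano output:
# roughly what ends up running on the remote host
su - bob -s bash -c 'echo am i bob ? :$USER:'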

Related

Swift Cocoa - Process() doesn't allow sudo

I have made a Cocoa Swift application in which there is an NSTextField. The text field returns a password, which is then loaded into a Process() that passes it into a bash command script.
The command is: echo ${1} | sudo -S echo works
To the process I pass the password as the argument, so that it is substituted for ${1}. When I run the command, the console says /usr/bin/sudo: Operation not permitted.
Is there a way I can use sudo in my bash scripts?
Help would really be appreciated. Thanks.
This is not the correct way to escalate privileges on OS X, particularly because it's terrible for security; passing the password around in cleartext like this is generally bad practice.
Let me illustrate why this is a problem. Suppose I have this bash script:
#!/bin/bash
sleep 10
echo ${1}
Then, I execute my script, using Process:
import Foundation
let process = Process()
process.launchPath = "/bin/bash"
process.arguments = ["/path/to/foo.sh", "This is a parameter"]
process.launch()
process.waitUntilExit()
Run the script, it outputs "This is a parameter", all seems well. But, while the script is running, in a separate Terminal window, I run this command:
$ ps -ajxww | grep foo.sh
And get the output:
username 85281 85276 85281 0 1 S s006 0:00.00 /bin/bash /path/to/foo.sh This is a parameter
As you can see, the parameter is plainly visible in the process list. If the parameter to that script had been my admin password, I would have just broadcast that password to every interested process on the system.
Anyway, instead of doing things like this, you should create a privileged launchd helper tool, which you can bless using SMJobBless(). The OS will prompt for the password in a secure way, and install your helper tool in a secure location. You can then communicate with your tool via XPC to have it do things as root. See Apple's EvenBetterAuthorizationSample for an idea of how to do this.

How to ssh as different user, change group, and run a script within Perl

I need to be able to run a script from within a script but first I need to ssh as a different user and then change my group.
I am currently doing the following inside my Perl script:
`ssh <user>@<host> ; newgrp <group> ; /script/to/run.pl`
When running this command from the command line it doesn't seem to switch groups. I assume this is because it starts a new shell.
How do I get around this and get it to work?
Also, please note, I do not have sudo/root privileges.
The first semicolon is interpreted by the local shell, so the three commands run one after another on the local host rather than on the remote one. I think you want this:
ssh <user>@<host> "newgrp <grp>; /bin/run.pl"
salva, in his reply, answered my question:
sg $group -c '$cmd'
The reason the following command:
newgrp <group>
doesn't work is that it creates a new shell. At least that is my best guess. The sg command gets around this.
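Putting that together with the ssh step from the question, the whole thing can be done in one non-interactive remote call, something like the sketch below (using the question's placeholders; sg runs the script with <group> as the effective group without spawning an interactive shell):
# run the script remotely under the desired group in a single step
ssh <user>@<host> "sg <group> -c '/script/to/run.pl'"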
I have found the following to work (with ksh on hpux) :
ssh user@host "echo 'date;pwd;echo bozo;id' | newgrp nerds;"
which basically executes the commands as user:nerds.
I think the OP wants to construct a string to execute from Perl (notice the backticks). Not sure, but the OP might have to use:
$s='ssh <user>@<host> ; newgrp <group> ; /script/to/run.pl'; # Normal single quotes, not backticks
exec($s);
OP, there are different ways to execute shell functions from a Perl script. You used backticks. There is also exec($s) and system($s).

SSH executing create user command and send command if user exists

I have a website that sends commands to a server via SSH2.
I'm trying to figure out one of two things, either:
A. Set a username for the registered account that sends the command (in SSH: if the username exists, send the command; else create a user named $username, set privileges to execute the Perl command ONLY, and then send the command)
OR
B. Set the sent Perl command's PID to $username
The basic concept of this is: I want to be able to kill the command based on the process ID that was set
OR
register a username and let the command run there so the user can kill the commands via that unique username, but ONLY allow killing perl and Perl script execution under that username.
Example: perl $username script.pl command command2 command3 (This sets the username for the users process ID)
Example2: kill $username (This kills the process based off of that ID)
P.S. The server that I am using SSH with is running CentOS 6, if that helps.
Here is my current script, which I would like to modify:
<?php
set_time_limit(0);
ignore_user_abort(true);
if (!function_exists("ssh2_connect")) die("function ssh2_connect doesn't exist");
// log in at url/ip on port 22
if (!($con = ssh2_connect("************", 22))) {
    echo "fail: unable to establish connection\n";
} else {
    // try to authenticate with username, password
    if (!ssh2_auth_password($con, "**********", "***********")) {
        echo "fail: unable to authenticate\n";
    } else {
        // execute a command
        if (!($stream = ssh2_exec($con, "perl i.pl ".$_GET['command1']." ".$_GET['command2']." ".$_GET['command3']))) {
            echo "fail: unable to execute command\n";
        } else {
            echo "" . stream_get_contents($stream);
            echo "Commands have been executed successfully!";
        }
    }
}
?>
You can use the id command to check if a given username exists.
You can then use the useradd commands to create a new user. Check out useradd man page for the various options available.
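For example, a minimal shell sketch of the check-then-create step (the $username value is a placeholder, and useradd normally has to be run as root):
# create the account only if it does not already exist
if ! id -u "$username" >/dev/null 2>&1; then
    useradd -m "$username"   # -m also creates the home directory
fi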
It is not easily possible to create a user with the privilege to execute just a single script. You can set up a chroot jail, but in most cases that's probably more trouble than it's worth. However, considering that you're trying to execute a command with input from a web request, it's probably inevitable that you'll have to create a jail, because what you're trying to do right now is basically opening up your remote server for malicious users to ransack to their heart's content.
You can use sudo with the -u option to run a script as another user. Example: sudo -u username ls
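If you also want the "only this one script" restriction without a full chroot, one common approach is a sudoers rule that permits exactly one command; the account names and path below are only placeholders:
# /etc/sudoers.d/webuser -- edit with visudo; hypothetical accounts and path
# lets webuser run only this Perl script as scriptuser, and nothing else
webuser ALL=(scriptuser) NOPASSWD: /usr/bin/perl /home/scriptuser/i.pl
The web-facing account would then invoke it as sudo -u scriptuser /usr/bin/perl /home/scriptuser/i.pl arg1 arg2.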
You can run a process in the background and get its process ID from the $! variable. Example: ls > /dev/null & echo $!. This process ID can be used later to kill the process. Be careful, however: since process IDs can get recycled, you may accidentally kill the wrong process. Running the kill command as the same unprivileged user as the command itself (as opposed to a user with higher privileges) is a good idea, to at least prevent the kill command from accidentally killing another user's process.
All these steps are no different whether you execute it locally or from SSH.
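Putting the process-ID idea together, a minimal sketch (the script name, arguments, and PID file location are placeholders):
# start the script in the background and remember its PID
perl i.pl command1 command2 command3 > /dev/null 2>&1 &
echo $! > "/tmp/job-$username.pid"
# later, kill that specific job (run this as the same unprivileged user)
kill "$(cat "/tmp/job-$username.pid")"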

PuTTY scripting to log onto host

I'm using PuTTY to remotely log onto my school's host. Upon logging in, we are required to do these steps:
enter username
enter password
command "add oracle"
command "sqlplus"
enter username
enter password
I will be logging into this host a lot over the course of this semester and I was hoping to create a script that would eliminate the redundancy of the above steps. Ignoring the obvious security oversights of having my password in the script, how would I achieve this? I have zero experience with scripting, so your feedback is greatly appreciated. Thanks!
Edit: I played around with the command-line options for Putty and I was able to bypass steps 1-2 using:
putty -load "host" -l username -pw password
I've also created a shell file that looks like so:
#!/bin/bash
add oracle10g
sqlplus username password
When I try to add this option to the command-line using the -m option, it looks like PuTTY logs into the host and then immediately exits. Is there a way to keep my session open after running the shell file or am I using the -m option wrongly? Here is a link to a PuTTY guide that I have been following: http://the.earth.li/~sgtatham/putty/0.60/htmldoc/Chapter3.html.
Here is the total command that I am trying to run from the command-line:
putty -load "host" -l username -pw password -m c:\test.sh
Figured this out with the help of a friend. The -m PuTTY option will end your session immediately after it executes the shell file. What I've done instead is I've created a batch script called putty.bat with these contents on my Windows machine:
@echo off
putty -load "host" -l username -pw password
This logs me in remotely to the Linux host. On the host side, I created a shell file called sql with these contents:
#!/bin/tcsh
add oracle10g
sqlplus username password
My host's Linux build used tcsh. Other Linux builds might use bash, so simply replace tcsh with bash and you should be fine.
To summarize, automating these steps is now done in two easy steps:
Double-click putty.bat. This opens PuTTY and logs me into the host.
Run the command tcsh sql. This adds the Oracle tool to my host and logs me into the SQL database.
I'm not sure why previous answers haven't suggested that the original poster set up a shell profile (.bashrc, .tcshrc, etc.) that executes their commands automatically every time they log in, on the server side.
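For instance, appending something like the following to the profile on the server side would run the question's two steps on every interactive login (a sketch assuming a bash login shell; the original poster's host used tcsh, where the syntax differs):
# at the end of ~/.bashrc (or ~/.bash_profile) on the host
if [[ $- == *i* ]]; then    # only for interactive sessions
    add oracle10g
    sqlplus username password
fi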
The quest that brought me to this page for help was a bit different -- I wanted multiple PuTTY shortcuts for the same host that would execute different startup commands.
I came up with two solutions, both of which worked:
(background) I have a folder with a variety of PuTTY shortcuts, each with the "target" property in the shortcut tab looking something like:
"C:\Program Files (x86)\PuTTY\putty.exe" -load host01
with each load corresponding to a PuTTY profile I'd saved (with different hosts in the "Session" tab). (Mostly they only differ in color schemes -- I like to have each group of related tasks share a color scheme in the terminal window, with critical tasks, like logging in as root on a production system, performed only in distinctly colored windows.)
The folder's Windows properties are set to very clean and stripped down -- it functions as a small console with shortcut icons for each of my frequent remote PuTTY and RDP connections.
(solution 1)
As mentioned in other answers, the -m switch is used to specify a script on the Windows side to run, and the -t switch is used to stay connected, but I found the options were order-sensitive if I wanted the command to run without exiting.
What I finally got to work after a lot of trial and error was:
(shortcut target field):
"C:\Program Files (x86)\PuTTY\putty.exe" -t -load "SSH Proxy" -m "C:\Users\[me]\Documents\hello-world-bash.txt"
where the file being executed looked like
echo "Hello, World!"
echo ""
export PUTTYVAR=PROXY
/usr/local/bin/bash
(no semicolons needed)
This runs the scripted command (in my case just printing "Hello, world" on the terminal) and sets a variable that my remote session can interact with.
Note for debugging: PuTTY loads the -m script when it starts, so if you edit the script you need to re-launch PuTTY instead of just restarting the session.
(solution 2)
This method feels a lot cleaner, as the brains are on the remote Unix side instead of the local Windows side:
From a PuTTY master session (not "edit settings" from an existing session), load a saved config and, in the SSH tab, set the remote command to:
export PUTTYVAR=GREEN; bash -l
Then, in my .bashrc, I have a section that performs different actions based on that variable:
case ${PUTTYVAR} in
    "")
        echo ""
        ;;
    "PROXY")
        # this is the session config with all the SSH tunnels defined in it
        echo ""
        echo "Special window just for holding tunnels open."
        echo ""
        PROMPT_COMMAND='echo -ne "\033]0;Proxy Session #master01\$\007"'
        alias temppass="ssh keyholder.example.com makeonetimepassword"
        alias | grep temppass
        ;;
    "GREEN")
        echo ""
        echo "It's not easy being green"
        ;;
    "GRAY")
        echo ""
        echo "The gray ghost"
        ;;
    *)
        echo ""
        echo "Unknown PUTTYVAR setting ${PUTTYVAR}"
        ;;
esac
(solution 3, untried)
It should also be possible to have bash skip my .bashrc and execute a different startup script, by putting this in the PuTTY SSH command field:
bash --rcfile .bashrc_variant -l
When you use the -m option putty does not allocate a tty, it runs the command and quits. If you want to run an interactive script (such as a sql client), you need to tell it to allocate a tty with -t, see 3.8.3.12 -t and -T: control pseudo-terminal allocation. You'll avoid keeping a script on the server, as well as having to invoke it once you're connected.
Here's what I'm using to connect to mysql from a batch file:
#mysql.bat
start putty -t -load "sessionname" -l username -pw password -m c:\mysql.sh
#mysql.sh
mysql -h localhost -u username --password="foo" mydb
https://superuser.com/questions/587629/putty-run-a-remote-command-after-login-keep-the-shell-running
I want to suggest a general solution for requirements like these; maybe it is of use to you: AutoIt. With that program, you can write scripts on top of any window, like PuTTY, and execute all the commands you want (like pressing buttons or clicking in textboxes and buttons).
This way you can emulate all steps you are always doing with Putty.
Entering a command after you've logged in can be done by going to the SSH section at the bottom of the PuTTY configuration, where you should have a "Remote command" option (data to send to the server); separate the two commands with ;
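Using the commands from the question's own script, that "Remote command" field would contain something like:
add oracle10g; sqlplus username password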
mputty can do that, but it does not always seem to work (if the wait period is too low).
mputty uses PuTTY and extends it.
There is an option to run a script.
If it does not work, make sure that the wait period before typing is set to a high value, or increase that value. See PuTTY sessions, then the name of the session, right mouse button, Properties/Script page.
For me it works this way:
putty -ssh root@1.1.1.1 22 -pw password
That is: putty, protocol, username@IP address, port, and password. It connects in less than a second.
You can use the -i privatekeyfilelocation option in case you are using a private key instead of password-based authentication.
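For example (the key path is a placeholder; PuTTY's -i expects a key in its own .ppk format):
putty -ssh root@1.1.1.1 22 -i C:\keys\private.ppk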

Capistrano leaving remote tail open

I am using Capistrano in a Rails-less environment and I'm having a problem with my remote tail task:
role :web, "pants#host1", "pants#host2"
task :weberror, :roles => :web do
stream("tail -f /var/log/httpd/error_log | sed \"s/^/\033[0;32m$HOSTNAME:\033[0m /\"")
end
If I press Ctrl+C to get out of the command, the tail process is left running on the server forever. Is there an alternate way to break out with Capistrano that cleans up the process, or am I doing something wrong in my task?
Have you tried adding the pty option to stop buffering?
stream(..., :pty => true)
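If some orphaned tails from earlier runs are still lingering, they can be cleaned up by hand on each server; the command below is a one-off cleanup matching the tail invocation above, not part of the Capistrano task:
# kill any leftover tail processes started by the task
pkill -f 'tail -f /var/log/httpd/error_log'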