ssh stops in between while running commands specified in a function - fish

~/.config/fish/config.fish
function start
nohup VBoxHeadless -startvm $argv
end
function stop
VBoxManage controlvm $argv poweroff
end
function vm
stop vm18 & start vm18 ; sshpass -p vm18 ssh archvm
end
Test:
$ vm
VBoxManage: error: Machine 'vm18' is not currently running
appending output to nohup.out
It never runs ssh for some reason. However, after the machine has started, if I do
$ sshpass -p vm18 ssh archvm
it works perfectly.
I don't understand why this happens, or how to fix it. I'm assuming the "appending output" thing stops the next command from running.
One way to fix this is to add & in start:
function start
nohup VBoxHeadless -startvm $argv &
end
Though I don't know why that works.
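For reference, a minimal sketch of the corrected config.fish with the & fix applied (same VM name and host alias as above). Most likely it works because the & backgrounds VBoxHeadless, so start returns immediately instead of blocking until the VM powers off, and fish can go on to run sshpass:
# ~/.config/fish/config.fish
function start
# & backgrounds the VM process so the function returns right away
nohup VBoxHeadless -startvm $argv &
end
function stop
VBoxManage controlvm $argv poweroff
end
function vm
stop vm18 & start vm18 ; sshpass -p vm18 ssh archvm
end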

Related

Running sudo on user-defined functions in fish

» cat ~/.config/fish/config.fish
function take
command mkdir $argv;and cd $argv
end
function check
sudo dmesg -c>/dev/null;
make clean; make;
/usr/local/bin/kedr start $argv;
sudo insmod "$argv.ko"; sudo rmmod $argv;
/usr/local/bin/kedr stop
dmesg;
end
function sudo
if functions -q $argv[1]
set argv fish -c "$argv"
end
command sudo $argv
end
While running I get this error:
» sudo check "simple-no-macro"
fish: Unknown command 'check simple-no-macro'
fish:
check simple-no-macro
^
You've asked this on GitHub as well, so here's my answer from there:
The problem here is that the function you've defined isn't present in the new instance of fish you start.
You'd be better off defining the check function in a file saved in ~/.config/fish/functions/check.fish, which will then let the function work across instances.
Side note: bash does let you export functions across instances using environment variables, but both zsh and ksh use a similar method to fish - see Propagating shell functions from Unix Power Tools.
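For example, a sketch of such an autoloaded function file, reusing the check function from the question (fish autoloads any file in ~/.config/fish/functions/ that is named after the function it defines, so the fish -c instance started by the sudo wrapper should be able to find check by name):
# ~/.config/fish/functions/check.fish
function check
sudo dmesg -c > /dev/null
make clean; make
/usr/local/bin/kedr start $argv
sudo insmod "$argv.ko"; sudo rmmod $argv
/usr/local/bin/kedr stop
dmesg
end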

execute buffer content in current shell upon exec ssh

My perl code does this
#!/usr/bin/perl
sleep 2;
exec 'ssh', '-o', "ConnectTimeout=10", "newhost", "sleep 3;pwd";
The problem is that whatever I type on the terminal while sleep is executing disappears into the non-interactive shell on newhost.
eg:
user@a02$ perl test.pl
ls
user@a02$ # ls is not executed
command ls executes if I don't use exec or system.
Is there a way to execute the contents of the buffer?
After a bit of digging I found that ssh redirects stdin from /dev/null when used with the -n option.
http://www.pixelbeat.org/programming/stdio_buffering/
To tell ssh that the remote command doesn't require any input, use the -n option.
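Applied to the script from the question, the fix is a single extra argument (a sketch; -n makes ssh read stdin from /dev/null instead of swallowing what you type at the terminal):
#!/usr/bin/perl
sleep 2;
# -n stops ssh from consuming the terminal's stdin while the remote command runs
exec 'ssh', '-n', '-o', 'ConnectTimeout=10', 'newhost', 'sleep 3;pwd';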

How to run a bash command as a different user in Capistrano?

How would I accomplish the following in Capistrano?
sudo su - postgres
/usr/pgsql-9.2/bin/pg_ctl status -D /var/lib/pgsql/9.2/data/
The following task doesn't work:
task :postgres_check do
on roles(:db), in: :sequence do |host|
execute "sudo su - postgres << EOF
/usr/pgsql-9.2/bin/pg_ctl status -D /var/lib/pgsql/9.2/data/
EOF"
end
end
The commands in the execute statement work in a bash script.
EDIT 1:
I also tried the following:
task :postgres_check do
on roles(:postgres_pref_db), in: :sequence do |host|
execute "/usr/pgsql-9.2/bin/pg_ctl status -D /var/lib/pgsql/9.2/data", :shell => "sudo su - postgres"
end
end
Which errors with:
DEBUG [68eb95f2] Command: /usr/pgsql-9.2/bin/pg_ctl status -D /var/lib/pgsql/9.2/data
DEBUG [68eb95f2] pg_ctl: could not open PID file "/var/lib/pgsql/9.2/data/postmaster.pid": Permission denied
cap aborted!
SSHKit::Command::Failed: /usr/pgsql-9.2/bin/pg_ctl status -D /var/lib/pgsql/9.2/data stdout: Nothing written
It appears that it is still executing the command as the ssh user.
I came across this and explored the answer for myself. I wouldn't have accepted the accepted answer either, so I'll provide what I did.
task :copy_files do
on roles(:web) do |host|
as 'other_user' do
execute "whoami"
end
end
end
Capistrano 3 uses SSHKit, and I found these examples really helpful for getting bash commands to work inside my tasks:
https://github.com/capistrano/sshkit/blob/master/EXAMPLES.md
You'll want to check out SSHKit and look at on(), within(), with(), as()... they can be nested in any order, so you end up with a lot of control even if it takes a few minutes to learn.
I think for your specific example you will want to use as() and within() to become the postgres user and run commands within a certain directory, as in the sketch below.
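Something along these lines (a sketch only, reusing the role and paths from the question; as() has SSHKit run the command through sudo as that user, so the deploy user still needs a matching sudoers entry):
task :postgres_check do
on roles(:db), in: :sequence do |host|
# become postgres and run pg_ctl from inside the data directory
as 'postgres' do
within '/var/lib/pgsql/9.2/data' do
execute '/usr/pgsql-9.2/bin/pg_ctl', 'status', '-D', '/var/lib/pgsql/9.2/data'
end
end
end
end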
Also, I had to disable requiretty in /etc/sudoers for my deploy user.

Upstart / init script not working

I'm trying to create a service / script to automatically start and control my nodejs server, but it doesn't seem to work at all.
First of all, I used this source as my main reference: http://kvz.io/blog/2009/12/15/run-nodejs-as-a-service-on-ubuntu-karmic/
After testing around, I minimized the content of the actual file to avoid any kind of error, resulting in this (the bare minimum, but it doesn't work):
description "server"
author "blah"
start on started mountall
stop on shutdown
respawn
respawn limit 99 5
script
export HOME="/var/www"
exec nodejs /var/www/server/server.js >> /var/log/node.log 2>&1
end script
The file is saved in /etc/init/server.conf
when trying to start the script (as root, or normal user), I get:
root@iof304:/etc/init# start server
start: Job failed to start
Then, I tried to check my syntax with init-checkconf, resulting in:
$ init-checkconf /etc/init/server.conf
File /etc/init/server.conf: syntax ok
I tried various other things, like initctl reload-configuration, with no result.
What can I do? How can I get this to work? It can't be that hard, right?
This is what our typical startup script looks like. As you can see, we're running our node processes as the nodejs user. We're also using the pre-start script to make sure all of the log file directories and .tmp directories are created with the right permissions.
#!upstart
description "grabagadget node.js server"
author "Jeffrey Van Alstine"
start on started mysql
stop on shutdown
respawn
script
export HOME="/home/nodejs"
exec start-stop-daemon --start --chuid nodejs --make-pidfile --pidfile /var/run/nodejs/grabagadget.pid --startas /usr/bin/node -- /var/nodejs/grabagadget/app.js --environment production >> /var/log/nodejs/grabagadget.log 2>&1
end script
pre-start script
mkdir -p /var/log/nodejs
chown nodejs:root /var/log/nodejs
mkdir -p /var/run/nodejs
mkdir -p /var/nodejs/grabagadget/.tmp
# Git likes to reset permissions on this file, but it really needs to be writable on server start
chown nodejs:root /var/nodejs/grabagadget/views/layout.ejs
chown -R nodejs:root /var/nodejs/grabagadget/.tmp
# Date format same as (new Date()).toISOString() for consistency
sudo -u nodejs echo "[`date -u +%Y-%m-%dT%T.%3NZ`] (sys) Starting" >> /var/log/nodejs/grabagadget.log
end script
pre-stop script
rm /var/run/nodejs/grabagadget.pid
sudo -u nodejs echo "[`date -u +%Y-%m-%dT%T.%3NZ`] (sys) Stopping" >> /var/log/nodejs/grabagadget.log
end script
As of Ubuntu 15.04, Upstart is no longer used; see systemd.
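On a systemd-based release, a roughly equivalent unit is shorter than the Upstart job. This is only a sketch, assuming the paths from the question and Ubuntu's /usr/bin/nodejs binary name; save it as /etc/systemd/system/server.service, then run systemctl enable server and systemctl start server. Output goes to the journal (journalctl -u server) rather than /var/log/node.log:
[Unit]
Description=server
After=network.target

[Service]
Environment=HOME=/var/www
ExecStart=/usr/bin/nodejs /var/www/server/server.js
# Restart=always roughly corresponds to the respawn stanza in the Upstart job
Restart=always

[Install]
WantedBy=multi-user.target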

Why is sleep needed after fabric call to pg_ctl restart

I'm using Fabric to initialize a postgres server. I have to add a "sleep 1" at the end of the command or the postgres server processes die without explanation or an entry in the log:
sudo('%(pgbin)s/pg_ctl -D %(pgdata)s -l /tmp/pg.log restart && sleep 1' % env, user='postgres')
That is, I see this output on the terminal:
[dbserv] Executing task 'setup_postgres'
[dbserv] run: /bin/bash -l -c "sudo -u postgres /usr/lib/postgresql/9.1/bin/pg_ctl -D /data/pg -l /tmp/pg.log restart && sleep 1"
[dbserv] out: waiting for server to shut down.... done
[dbserv] out: server stopped
[dbserv] out: server starting
Without the && sleep 1, there's nothing in /tmp/pg.log (though the file is created), and no postgres processes are running. With the sleep, everything works fine.
(And if I execute the same command directly on target machine's command line, it works fine without the sleep.)
Since it's working, it doesn't really matter, but I'm asking anyway: Does someone know what the sleep is allowing to happen and why?
You might also try using the pty option, setting it to false, to see if it's related to how Fabric handles pseudo-ttys.
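In Fabric 1.x that would look something like the following sketch; pty is a standard keyword argument of sudo(), and whether disabling it removes the need for the sleep is exactly what you'd be testing:
from fabric.api import env, sudo, task

@task
def setup_postgres():
    # Fabric allocates a pseudo-tty by default; pty=False turns that off
    sudo('%(pgbin)s/pg_ctl -D %(pgdata)s -l /tmp/pg.log restart' % env,
         user='postgres', pty=False)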