How to start soapui.sh under Linux in background?
I tried several variants like:
nohup ./soapui.sh < /dev/null 2>&1 > /dev/null &
nohup ./soapui.sh & < /dev/null 2>&1 > /dev/null &
When I exited the session and logged back in to the server, SoapUI was no longer running.
Try the below:
nohup ./soapui.sh &
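If SoapUI still dies when you log out, a slightly more defensive variant (just a sketch; the log path is only an example) redirects all three streams and detaches the job from the shell so it can never receive SIGHUP:
nohup ./soapui.sh > /tmp/soapui.log 2>&1 < /dev/null &
disown    # remove the job from the shell's job table so logout cannot signal it
After reconnecting, you can confirm it is still alive with ps -ef | grep soapui.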
I have a Swift command line program which runs a server and prints the server's URL when it starts. I'm then trying to capture the URL in a bash shell variable so I can pass it to other programs.
Basically my Swift program looks like this
@main
struct MyApplication {
    static func main() throws {
        let server = try VoodooServer {
            Endpoints.config
        }
        print(server.url.absoluteString)
        server.wait()
    }
}
and when I run it from the command line I get output that looks like this:
% .build/release/server run -c Tests/files/TestConfig3
http://127.0.0.1:8082
However when I try to capture the URL using
% export SERVER_URL=`.build/release/server run -c Tests/files/TestConfig3` &
[3] 19101
and then check the exported variables using export, there's nothing there.
I've tried commenting out the wait() function so the server exits immediately, and then I do get the URL in the variable, i.e. running
% export SERVER_URL=`.build/release/server run -c Tests/files/TestConfig3`
% echo $SERVER_URL
http://127.0.0.1:8080
So I'm guessing the problem is that because the server is not exiting, the value is not being stored because stdout has not finished or something like that.
So how can I capture the output from the server into a variable without stopping it?
Your problem is the usage of &:
$ export HELLO=`echo world` &
[1] 3774017
[1]+ Done export HELLO=`echo world`
$ export | grep HELLO
$ export HELLO=`echo world`
$ export | grep HELLO
declare -x HELLO="world"
When you run a command "regularly", the shell just runs it as you would expect. Examples of regular running:
echo world
.build/release/server run -c Tests/files/TestConfig3
export HELLO=`echo world`
export SERVER_URL=`.build/release/server run -c Tests/files/TestConfig3`
When you run things with &, you're asking the shell to run them in the background, while you continue about your day.
That means your shell has to keep accepting your commands while also running the background command.
So the shell launches a background shell where it runs your commands. Meaning, when you run:
export SERVER_URL=`.build/release/server run -c Tests/files/TestConfig3` &
The shell launches a background shell, and the background shell runs:
export SERVER_URL=`.build/release/server run -c Tests/files/TestConfig3`
That background shell will indeed export SERVER_URL to its own subprocesses, but your regular, foreground shell, isn't a subprocess of the background shell. Rather, the background shell is a subprocess of the foreground shell.
That is why the export isn't visible in the foreground shell.
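To convince yourself that the variable does exist inside the background shell, print it from that same shell (a sketch; the job number will differ and the asynchronous output may interleave with your prompt):
$ ( export HELLO=`echo world`; export | grep HELLO ) &
[1] 3774020
declare -x HELLO="world"
$ export | grep HELLO
$
The background shell sees HELLO; the foreground shell still does not.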
Unfortunately, there's no simple way to capture that URL while the server is still running. What people usually do is have the server write that information to a file, so that the foreground shell can read the file, e.g.:
$ ( (sleep 1; echo world > config; sleep 50) & ) &
[1] 3775004
[1]+ Done ( ( sleep 1; echo world > config; sleep 50 ) & )
$ sleep 1
$ export HELLO=`cat config`
$ export | grep HELLO
declare -x HELLO="world"
(I have replaced your Swift server with a simple bash command that goes to the background via fancy bash syntax)
As you can see, the background process writes its configuration to the file config, but it's difficult to know when config will be written, so you have to resort to something more complex:
$ ( (sleep 10; echo world > config.tmp; mv config.tmp config; sleep 50) & ) &
[1] 3775481
[1]+ Done ( ( sleep 10; echo world > config.tmp; mv config.tmp config; sleep 50 ) & )
$ while ! [ -f config ]; do sleep 1; done
$ export HELLO=`cat config`
$ export | grep HELLO
declare -x HELLO="world"
Here, we're writing to config.tmp, and we're only renaming it to config after we finish, to ensure that when the foreground shell tries to read, it reads the full configuration after the server definitely finished writing it.
But on the foreground side, we actually have to wait for it to finish writing it, which is what the while loop is for.
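Applied to your case, a minimal sketch (assuming the server prints the URL as its first line of output and flushes it; server.out is just an arbitrary file name) would be:
.build/release/server run -c Tests/files/TestConfig3 > server.out &
while ! [ -s server.out ]; do sleep 1; done   # wait until the server has written something
export SERVER_URL=`head -n 1 server.out`
echo $SERVER_URL
Here the redirection plays the role of the config file above: the server keeps running in the background, and the foreground shell polls the file until the URL shows up.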
I would like to connect to a VPN on start-up of OSMC.
Environment:
installed OSMC on a Raspberry Pi 2
downloaded, compiled, and installed Shrew Soft VPN on the device
As user 'osmc' with ssh
> sudo iked starts the daemon successfully
> ikec -r "test.vpn" -a starts the client, loads the config and connects successfully
rc.local:
#!/bin/sh -e
#
# rc.local
#
# This script is executed at the end of each multiuser runlevel.
# Make sure that the script will "exit 0" on success or any other
# value on error.
#
# In order to enable or disable this script just change the execution
# bits.
#
# By default this script does nothing.
sudo iked >> /home/osmc/iked.log 2>> /home/osmc/iked.error.log &
ikec -a -r "test.vpn" >> /home/osmc/ikec.log 2>> /home/osmc/ikec.error.log &
exit 0
After the Raspberry Pi starts, iked is visible as a process with ps -e,
but ikec is not running.
Running osmc@osmc:~$ /etc/rc.local manually starts the script and connects to the VPN successfully.
Problem:
Why does the script not work correctly on start-up?
Thank you for your help!
I was also looking to do the same thing as you and ran into the same problem. I'm no Linux expert, but I did figure out a workaround.
I created a script called ikec_after_reboot.sh and it looks like this...
$ cat ikec_after_reboot.sh
#!/bin/bash
echo "Starting ikec"
ikec -r test.vpn -a
I then installed cron.
sudo apt-get update
sudo apt-get install cron
Edit the cron job as root and run the ikec script 60 seconds after reboot.
sudo crontab -e
SHELL=/bin/bash
@reboot sleep 60 && /home/osmc/ikec_after_reboot.sh >> /home/osmc/ikec.log 2>&1
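To check that the entry was saved and to watch the client come up after the next reboot (plain verification commands, nothing specific to this setup):
sudo crontab -l                  # the @reboot line should be listed
tail -f /home/osmc/ikec.log      # follow the client's output once it starts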
Now edit your /etc/rc.local file and add the following.
sudo iked >> /home/osmc/iked.log 2>> /home/osmc/iked.error.log &
exit 0
Hopefully, this is helpful to you.
I'm looking for the right way to run a shell script on the first boot of Solaris.
I need to run a resize command; here is my script:
#!/bin/sh -ux
echo "#!/bin/sh -ux" > /etc/rc3.d/S90scale
echo "/sbin/zpool set autoexpand=on rpool" >> /etc/rc3.d/S90scale
echo "/sbin/zpool online -e rpool c1d0" >> /etc/rc3.d/S90scale
echo "rm /etc/rc3.d/S90scale" >> /etc/rc3.d/S90scale
echo "/sbin/shutdown -y -i6 -g0" >> /etc/rc3.d/S90scale
chmod a+x /etc/rc3.d/S90scale
Actually the script itself runs properly, but unfortunately the resize does not work. When I run the same commands from a user session, everything is fine.
What exactly am I doing wrong?
Your method is not the "right" one to run a script once after boot, as it uses the legacy approach. The correct way would be to create an SMF service that runs once. However, it does work anyway with Solaris 10 and 11, as the rc scripts, while deprecated, are still processed, so I won't elaborate more about SMF.
The main issue is that you don't check for errors: whatever happens, the script removes itself and reboots, preventing any analysis.
I would suggest modifying your script to log what is happening to a file and to quit on error:
#!/bin/ksh
cat > /etc/rc3.d/S90scale <<%EOF%
exec > /var/tmp/S90scale.log 2>&1 # logs everything to file
set -xe # show commands and exits on error
/sbin/zpool set autoexpand=on rpool
/sbin/zpool online -e rpool c1d0
mv /etc/rc3.d/S90scale /etc/rc3.d/_S90scale
/sbin/shutdown -y -i6 -g0
%EOF%
chmod a+x /etc/rc3.d/S90scale
After the next reboot completes, you should have a look at the /var/tmp/S90scale.log file and possibly see an error message there.
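A quick way to tell whether the script actually ran (assuming the script above was used unchanged) is to check whether it renamed itself, then read the log:
ls /etc/rc3.d/ | grep S90scale    # shows _S90scale once the script has run
cat /var/tmp/S90scale.log         # every command plus any error message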
I'm doing a script and I would like to add the following line
pre-up iptables-restore < /etc/iptables.rules
to the file interfaces, which is located at /etc/network/interfaces, but although I have enabled write permissions on this file (I work in Ubuntu), I'm not able to do it... I'm trying to use the following command in my bash script:
sudo echo "pre-up iptables-restore < /etc/iptables.rules" >> /etc/network/interfaces
Any suggestion of how to do it without using gedit or vi?
Thanks in advance!
You need to stop your own shell from performing the redirection before it starts sudo; hand the whole command, redirection included, to a root shell instead:
sudo bash -c 'echo "pre-up iptables-restore < /etc/iptables.rules" >> /etc/network/interfaces'
This way the complete command, including the >> redirection, is executed with root access, not only the echo "pre-up iptables-restore < /etc/iptables.rules" part.
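A common alternative with the same effect (just a sketch; it relies on tee -a to do the appending as root) is:
echo "pre-up iptables-restore < /etc/iptables.rules" | sudo tee -a /etc/network/interfaces > /dev/null
Here the echo runs unprivileged, and only tee, which opens the file for appending, runs under sudo.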
Hi, I have passwordless ssh set up and have Perl call ssh -o "BatchMode yes" user@host "runMe.pl arg1 arg2".
runMe.pl calls MATLAB with the function run_online and the given args.
nohup matlab -nojvm -nodisplay -r "run_online('$imgfolder/$folder/', '$ARGV[0]$folder', '/homes/rbise/results/mitosis/$ARGV[0]/$folder/')" > out.txt < /dev/null &
For some reason MATLAB never starts running. Why is this?
Thanks
This is substantially a duplicate of the [perl] question that was asked immediately before this one -- at least, the answer is the same. You have no controlling terminal when you connect with ssh. Try ssh -o "BatchMode yes" user@host "bash -c 'runMe.pl arg1 arg2'".
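If the script still doesn't start, the usual next step when backgrounding something over ssh (a sketch; the log file name is just an example) is to detach the remote command fully so ssh is not left holding any stream open:
ssh -o "BatchMode yes" user@host "nohup runMe.pl arg1 arg2 > runMe.log 2>&1 < /dev/null &"
With every stream redirected and the job started under nohup, the remote command no longer depends on the terminal-less ssh session staying alive.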