Substitute user with long command doesn't work

I'm having trouble starting a service as a specific user (under Ubuntu 14.04) and I'm unsure what the problem is. I use the following command to autostart a jar file on startup:
nohup ${JAVA_EXEC} -jar ${MICROSERVICE_HOME}/bin/${MICROSERVICE_JAR} server ${MICROSERVICE_CONF} 2>> /dev/null &
That works perfectly, so there is no problem with the variables and so on. This script gets executed by the current user, which in this case is root. Since I don't want to take any risks, I want to execute it as a specific (already existing) user. Normally my approach would be to change the command to:
nohup su some_user -c "${JAVA_EXEC} -jar ${MICROSERVICE_HOME}/bin/${MICROSERVICE_JAR} server ${MICROSERVICE_CONF}"
But this doesn't work. I don't get any error messages (of course, I left out the redirection of stderr for test purposes) and nohup.out is empty.
I have already tried different variants, e.g. replacing the double quotes with single quotes and escaping the "$" inside the command. According to this thread it should work with this syntax.
None of the solutions in that thread work, e.g.:
su some_user -c "nohup ${JAVA_EXEC} -jar ${MICROSERVICE_HOME}/bin/${MICROSERVICE_JAR} server ${MICROSERVICE_CONF}" -> doesn't work
nohup runuser some_user -c "nohup ${JAVA_EXEC} -jar ${MICROSERVICE_HOME}/bin/${MICROSERVICE_JAR} server ${MICROSERVICE_CONF}" -> doesn't work (the runuser command doesn't exist).
What am I missing?
Any help is much appreciated!
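As an aside (not from the original thread), the ordering matters: nohup, the output redirection, and the trailing & all need to sit inside the string passed to -c so that the child shell interprets them. A minimal, testable sketch of that principle, with sh -c standing in for su some_user -c (which needs root) and echo standing in for the java invocation:

```shell
# sh -c stands in for 'su some_user -c'; echo stands in for the java call.
# nohup, the redirection, and '&' are inside the quoted command, so the
# child shell applies them, not the calling shell.
CMD="echo service-started"
sh -c "nohup $CMD > /tmp/svc.out 2>&1 < /dev/null &"
sleep 1            # give the background child a moment to finish
cat /tmp/svc.out   # prints: service-started
```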


What results should I see from running initdb?

Running initdb looks pretty straightforward from the docs.
I created the data directory, checked the permissions on the folder, ran initdb as the postgres user, and entered the password.
It returns immediately.
C:\Program Files\PostgreSQL\12>runas /user:pgUser@domain "bin\initdb.exe -k -D \"C:\Program Files\PostgreSQL\12\data\""
Enter the password for pgUser@domain:
Attempting to start bin\initdb.exe -k -D "C:\Program Files\PostgreSQL\12\data" as user "pgUser@domain" ...
C:\Program Files\PostgreSQL\12>
Results:
The data dir is still empty, no errors in the event log, and the service won't start.
I expected it to populate data with the base directories, create the postgres and template databases, and be able to start the database engine as a service.
The resolution was three-fold.
First, as suspected, runas starts another process to run the command.
Redirecting, as such
runas /user:pgUser@domain "cmd" > output.txt
only redirects the output of runas.
To capture the output of the cmd, you need to redirect inside that process.
runas /user:pgUser@domain "cmd > output.txt 2>&1"
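The same principle holds for any wrapper that launches the command in a separate process: a redirection written inside the quoted command is applied by the child shell, so the child's output reaches the file even though the wrapper's own stdout goes elsewhere. A minimal stand-in using sh -c (runas itself is Windows-only):

```shell
# The redirection is part of the quoted command, so it is performed by
# the child shell that runs 'echo', not by the calling shell.
sh -c 'echo from-child > /tmp/child.out 2>&1'
cat /tmp/child.out   # prints: from-child
```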
Second, the PostgreSQL installer on Windows runs initdb as part of the installation, so uninstalling and re-installing accomplished that.
Third, one of the things I wanted to accomplish with the reset was enabling checksums, and something I had read said that you can only set that flag using initdb. But that's not true: you can enable checksums on an existing cluster.
So I didn't need to run initdb at all, despite the many recommendations to start with a freshly initdb'd installation.

SFTP from web service through Cygwin fails

I have a web page running on Apache which uses a mature set of Perl scripts for monitoring our workplace servers and applications. One of those tests goes through Cygwin's SFTP, lists files there, and assesses them.
The problem I have is with SFTP itself. When I run that part of the test manually from cmd as D:\cygwin\bin\bash.exe -c "/usr/bin/sftp -oIdentityFile=[privateKeyPath] -oStrictHostKeyChecking=no -b /cygdrive/d/WD/temp/list_SFTP.sh [user]@[hostname]", or invoke the very same set of Perl files by hand, it works OK (returns the list of files as it should). When exactly the same code runs through the web page, it fails quickly and does not report anything. The only thing I get is error code 255 and "Connection closed". No error stream, no verbose output, nothing, no matter which way I have tried to capture any error.
To cut a long story short, the culprit was the HOME path.
When run manually, either directly from cmd or through Perl, D:\cygwin\bin\bash.exe -c "env" reports HOME as HOME=/cygdrive/c/Users/[username]/, BUT the same command run through the web page reports HOME=/, i.e. root, apparently losing the home directory somewhere along the way.
With this knowledge the solution is simple: prepend the SFTP command with the proper home path (e.g. D:\cygwin\bin\bash.exe -c "export HOME=/cygdrive/c/Users/%USERNAME%/ ; /usr/bin/sftp -oIdentityFile=[privateKeyPath] -oStrictHostKeyChecking=no -b /cygdrive/d/WD/temp/list_SFTP.sh [user]@[hostname]") and you are good to go.
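A minimal, testable illustration of that fix, with echo standing in for the sftp call: exporting HOME inside the -c string gives the child the intended home directory regardless of what the calling environment (here, the web server) provides:

```shell
# Export HOME for the child shell only; the parent's HOME is untouched.
sh -c 'export HOME=/tmp/fakehome; echo "$HOME"'   # prints: /tmp/fakehome
```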

Adapt Perl script to run from CGI

I have a Perl script which works fine from the shell but doesn't work from the web (lighttpd + mod_cgi). I found out that the problem is with the following line:
my $lastupdate = `/opt/mongo/bin/mongo 127.0.0.1:27117/getVersion -u test -p test --eval 'db.polling.find({},{"_id":0,"host":0,"ports":0}).sort({"date":-1}).limit(1).forEach(function(x){printjson(x)})' | awk -F'"' '/date/{print \$4}' |sed 's/T/,/;s/Z//'`;
As I understood it, when running from CGI the string is not being split, so I did the splitting myself:
my $lastupdate = system('/opt/mongo/bin/mongo', '127.0.0.1:27117/getVersion', '-u', 'test', '-p', 'test', '--eval', 'db.polling.find({},{"_id":0,"host":0,"ports":0}).sort({"date":-1}).limit(1).forEach(function(x){printjson(x)})', '|', 'awk', '-F', '"', '/date/{print', '\$4}', '|sed', 's/T/,/;s/Z//');
The script works now but gives me an unexpected value (it differs from the value I get when running from the shell).
What did I miss?
P.S. I know that there are smarter ways to interact with MongoDB from Perl, but my environment is totally firewalled. I have access neither to CPAN nor to the RH repos, and the Perl MongoDB driver has too many dependencies to install manually.
The environment that you run a program under from a shell is completely different from the environment that the same program gets when run from a web server. Most obviously, it will be run as a different user - one who will have far more restricted filesystem permissions than the average user.
You can (partly) simulate this by working out which user your web server runs as (perhaps apache, www or nobody) and using sudo to run your program as that user. This might well reveal what the problem is.
You can't just switch from backticks to system(). Backticks return the output of the command, whereas system() returns a status value that requires some interpretation. That'll be why you're seeing a different result. (Note also that in the list form of system(), the '|' is passed to mongo as a literal argument rather than creating a pipeline.)
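A shell analogue of that distinction (a sketch only; the original code is Perl): command substitution captures a command's output, much like Perl's backticks, while a bare invocation only leaves an exit status behind, much like Perl's system():

```shell
# Capturing output, like Perl backticks:
out=$(echo hello)
echo "$out"     # prints: hello

# A bare invocation only sets an exit status, like Perl's system():
echo hello > /dev/null
echo "$?"       # prints: 0 (success), not the command's output
```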

Can I automate Windbg to attach to a process and set breakpoints?

I am debugging a Windows process that crashes if execution stops for even a few milliseconds. (I don't know exactly how many, but it is definitely less than the time taken by my reflexes to resume the process.)
I know I can launch a WinDbg session from the command prompt by typing windbg -p PID, which brings up the GUI. But can I then also pass it WinDbg commands via the command prompt, such as bm *foo!bar* ".frame;gc"; g?
Because if I can pass it these commands, I can write them down in a .bat file and just run it. There would at least be no delay from entering (or even copy-pasting) the commands.
Use the -c parameter to pass them:
windbg -p PID -c "bm *foo!bar* .frame;gc;g"
According to the help (found by running windbg /?):
-c "command"
Specifies the initial debugger command to run at start-up. This command must be enclosed in quotation marks. Multiple commands can be separated with semicolons. (If you have a long command list, it may be easier to put them in a script and then use the -c option with the $<, $><, $$<, $$>< (Run Script File) commands.)
If you are starting a debugging client, this command must be intended for the debugging server. Client-specific commands, such as .lsrcpath, are not allowed.
You may need to play around with the quotes...
Edit: Based on this answer, you might be better off putting the commands into a script file to deal with the quotes:
script.txt (I think this is what you want):
bm *foo!bar* ".frame;gc"
g
Then in your batch file:
windbg -p PID -c "$<full_path_to_script_txt"

Use stdout/stderr from remote (detached) command execution in screen without file access

I am running isolated bash consoles (in the context of Linux containers/network name spaces) in separate GNU screen sessions on a Linux machine.
I am able to remotely execute commands on these consoles using ssh and screen functionality, as discussed in several other threads, using:
ssh <hostname> screen -S <sessionname> -X <cmd>
I can also fetch the output of the above command relying on either the hardcopy functionality (screen -S <sessionname> -X hardcopy) or the logging functionality (screen -S <sessionname> -l); however, these all require file access. The same applies when the output is redirected to a logfile (using, for example, > logfile.txt), etc.
Is there a way to avoid file access when redirecting the output of the executed command? This would reduce the file-access load on the executing machine. I would like to redirect the stdout/stderr data produced within the screen session to the calling environment, so that the output is returned on screen when executing ssh <hostname> screen -S <sessionname> ... <magiccommand>.
Any suggestions?