I have some scripts that I run using jboss-cli -c --controller=... --file=myscript.cli.
The -c and --controller options are great, because my script does not need to know which server it will be run against and can be reused for multiple servers.
I now want to use the offline-CLI feature to avoid port conflicts and to prevent servers from being reachable through the network while they are being set up.
My issue now is that in order to start an embedded server I have to use the CLI command embed-server, but I don't want to add that command to my scripts, because the scripts are not supposed to know the name of the server config XML file.
Unfortunately I can't use both --command="embed-server --server-config=my-standalone.xml" and --file=myscript.cli at the same time, because the CLI complains with:
Only one of '--file', '--commands' or '--command' can appear as the argument at a time.
Another thing I tried was --commands="embed-server --server-config=my-standalone.xml,run-batch --file=\"myscript.cli\"", but this does not work either, because my scripts contain some if-else logic, for instance:
if (outcome == success) of /subsystem=iiop-openjdk:read-resource()
/subsystem=iiop-openjdk:remove()
end-if
And unfortunately conditional logic is not supported in batch mode (see https://bugzilla.redhat.com/show_bug.cgi?id=1083176).
The simple way is to start your embedded server in your script:
embed-server --std-out=echo --server-config=standalone-full.xml
/subsystem=messaging-activemq/server=default/jms-queue=inQueue:add(durable=true, entries=["/queue/inQueue","java:jboss/exported/queue/inQueue"])
/subsystem=messaging-activemq/server=default/jms-queue=outQueue:add(durable=true, entries=["/queue/outQueue","java:jboss/exported/queue/outQueue"])
quit
Don't forget to quit at the end of your cli script :)
If you are using a Unix system you may try something like this:
(echo embed-server --std-out=echo --server-config=my-standalone.xml; cat myscript.cli) | jboss-cli.sh
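If you want to keep the config file name out of your scripts entirely, a small wrapper along these lines might do it (the wrapper name, argument handling and echoed quit are my own sketch, not from the original answers):

#!/bin/sh
# run-offline.sh (hypothetical): ./run-offline.sh my-standalone.xml myscript.cli
CONFIG="$1"
SCRIPT="$2"
{
  echo "embed-server --std-out=echo --server-config=$CONFIG"
  cat "$SCRIPT"
  echo "quit"
} | jboss-cli.sh

This way the .cli files stay server-agnostic and only the wrapper call decides which configuration the embedded server uses.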
I have a Perl script file that was running fine in crontab, but it suddenly stopped running without any modification.
cd /home/user/public_html/crons && ./script.pl 2>&1 >/dev/null
The top of the script file is #!/usr/bin/perl -X
The expected output from this script is changes in the database.
I have another script file with the same modifications and it still works fine.
When I run the file in the browser it works fine and executes all lines without any problem.
I tried the full path /usr/bin/perl but it didn't work.
I tried putting perl at the beginning of the command but it didn't work.
I ran the command from SSH using PuTTY but nothing happened.
I checked the log file /var/log/cron but there are no errors at all.
I created a temporary log file with cd /home/user/public_html/crons/script.pl > /tmp/temp.log 2>&1 to see the errors, but the log is empty.
Here is the solution:
I found the issue: there was a stuck process for the same cron file, so I killed that process and it is fixed.
You can find the process for your file like this:
ps aux | grep 'your cron file here'
This is a really common antipattern people seem to tend toward with cron.
Cron sends you an email with the output of your script, if it generates any output. People often redirect output to /dev/null to prevent cron from sending the email. This is bad because now the output of your script is lost entirely. Even if the script has some built-in logging, it might generate errors before it opens the log file, and those are lost. It might also crash in a way that never reaches the logging mechanism.
At a bare minimum, you should just remove 2>&1 >/dev/null to start receiving the email. (Also, test your mail setup with a temporary cron job like 1 * * * * echo "Test".)
The next better solution is to change it to >> /var/log/myscript/current.log, set up something to rotate the log files (like logrotate), and make sure to create that directory with permissions such that the user the script runs as is allowed to write to it. By redirecting only STDOUT of the script, any errors or warnings it writes to STDERR will cause you to get an email, and if there are no errors or warnings the output goes to the log file and no email gets sent.
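As a rough sketch (paths and schedule are made up), the crontab line and a matching logrotate snippet could look like this:

# crontab: append STDOUT to the log, leave STDERR alone so errors still get mailed
0 * * * * /home/user/public_html/crons/script.pl >> /var/log/myscript/current.log

# /etc/logrotate.d/myscript
/var/log/myscript/current.log {
    weekly
    rotate 8
    compress
    missingok
    notifempty
}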
Neither of those changes solves the root problem, though, which is that when cron runs your script it does so with a different environment than you have on the command line. What you really want is a way to run the script with a consistent environment, and log it. The "ultimate solution" is to define your task in some kind of service manager, and then use cron to occasionally start it. For instance, you could use systemd and define a service that doesn't restart, then use systemctl start my_custom.service in your cron job. Now you can test independently of cron, your tests will have the same exact environment, and the runs are logged by the service manager. As extra bonuses, you are protected from accidentally running your script twice at once, and you get a clean way to stop a running cron job without the danger of stale pid files.
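A minimal sketch of that setup, assuming a oneshot unit named my_custom.service and the script path from the question (both placeholders):

# /etc/systemd/system/my_custom.service
[Unit]
Description=Cron-triggered Perl job

[Service]
Type=oneshot
User=user
ExecStart=/home/user/public_html/crons/script.pl

# root's crontab: cron only triggers the service, systemd runs and logs it
0 3 * * * systemctl start my_custom.service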
I don't particularly advocate systemd myself, but thankfully there are lots of alternatives:
Runit: http://smarden.org/runit/runsvdir.8.html
S6: https://skarnet.org/software/s6/
Perp: http://b0llix.net/perp/site.cgi?page=perpd.8
(Installing and configuring a service manager is a bigger task than just using systemd if your distro is already based on systemd.) Each of these allows you to define a service that doesn't restart. Then you use a shell command to issue a "run once" directive to the supervisor, which runs the task as a child. Now you can easily launch the jobs yourself and see all the errors in the log, and then add that command to the crontab and know that it will run identically when cron starts it.
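With runit, for example, the "run once" directive is sv once, so the crontab entry might look like this (service name hypothetical):

# start the supervised job once; runsv will not restart it when it exits
15 2 * * * sv once /etc/service/my_nightly_job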
Back to your original problem: once you get some logging, you are likely to discover it is a permission problem or an upgraded module in the system Perl.
Is it possible to run a Perl script that is located on a remote server, on that server, from Windows? There is a job on the remote server that I want to get done every time I do something on Windows.
You have to have something listening for an instruction to run the script, and then you have to send the instruction.
There are lots of approaches you could take to that, including:
Running an SSH server and then connecting to it from an SSH client on the Windows machine
Running an HTTP server, running the script through FastCGI, and then requesting the URL for it from curl or a browser on the Windows machine
Writing a custom protocol, listening on a socket, and then writing a custom client that you run on the Windows machine
Absolutely.
You can use plink to run commands on the server from Windows, assuming the server is running sshd.
plink user@a.domain.ext echo hi
This will print "hi\n" to the standard output.
Substitute /path/to/perl/script for echo above, and replace hi with any command-line arguments the script needs.
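For example (remote path and argument are hypothetical):

plink user@a.domain.ext /home/user/bin/nightly_report.pl --date=2020-01-01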
plink is available here: http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html
One cautionary personal note from doing this many times is that the environment in which the perl script will be run is much less complete than what you would experience when logging in via a full SSH session and running the command interactively. Many environment variables you would normally expect are unset.
For instance, using "set | wc -l" as the command above shows only 39 environment variables defined, while an interactive SSH session has 57. You have to make sure your Perl script isn't depending on an environment variable that hasn't been set. For instance, you may need to use full paths for any modules it uses, or use the -I flag in the shebang line, because @INC may not be what you expect it to be.
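One way around that, as a sketch (paths are hypothetical), is to pass the module path explicitly on the command line rather than relying on the login environment:

# set @INC explicitly via PERL5LIB so the script finds its modules
plink user@a.domain.ext "PERL5LIB=/home/user/perl5/lib/perl5 /home/user/bin/nightly_report.pl"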
I am facing a problem in DB2. In my Oracle environment it was very easy for me to include multiple scripts in one master script, which were then executed sequentially, e.g.:
Master.sql:
connect ....
@script1.sql
@script2.sql
Now I have to build up the same logic in DB2 LUW. Is there a simple way to include multiple scripts in one master script? I would like to have one single db2 call from the shell that executes the master script and, within it, all subscripts.
Regards
Jan
There is nothing to stop you from creating a single file with multiple SQL batches. In the Windows world, it would look like this:
Note: first you initialize the db2 command prompt.
db2cmd -c -w -i %1.bat
With as many of these as you want in the .bat file:
db2 -txf c:\Example\db2html.sql
In Linux, the DB2 CLP is included in the shell once you load the db2profile (. /home/db2inst1/sqllib/db2profile). In Windows, you need to call db2cmd in order to use the DB2 CLP.
With an interactive DB2 CLP you cannot call db2 scripts via @scriptX; however, you can call them from the shell like
db2 -tvf script
However, if you use CLPPlus you can do almost everything you can do in SQL*Plus. For more information: https://www.ibm.com/developerworks/community/blogs/IMSupport/entry/tech_tip_db2_s_new_clp_plus_utility?lang=en
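Putting the shell approach together, a master script on Linux might look like this (database and file names are placeholders):

#!/bin/sh
# load the DB2 environment so the db2 CLP is available in this shell
. /home/db2inst1/sqllib/db2profile

db2 connect to MYDB
db2 -tvf script1.sql
db2 -tvf script2.sql
db2 connect reset

Because all the db2 calls run in the same shell session, they share one CLP connection, so the connect at the top covers every subscript.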
I am working on an application that needs to send commands to remote servers. Sending commands is easy enough with the plethora of SSH client libraries.
However, I would like the shell state (i.e. current working directory, environment variables, etc.) preserved between each command. None of the client libraries I have seen do this. For example, the following code does not do what I want:
use Net::SSH::Perl;
my $server = Net::SSH::Perl->new($host);
$server->login($user, $pass);
$server->cmd('cd /var');
$server->cmd('pwd'); # I _would like_ this to output /var
There will be other tasks performed between sending commands, so combining the commands like $server->cmd('cd /var; pwd') is not acceptable.
Net::SSH::Expect does what you want, though the "Expect" way is not completely reliable as it will be parsing the output of your commands and trying to detect when the shell prompt appears again.
I'm not sure what you are doing exactly, but you could just start one SSH session, keep it open, and run all your commands through it. If you really can't do this, maybe you can just use absolute paths for everything.
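For illustration, here is the same idea from the shell side (host and user are placeholders): feeding every command to one remote shell keeps the working directory and variables between them:

# one connection, one remote shell: state persists across the commands
ssh user@host /bin/sh <<'EOF'
cd /var
pwd              # prints /var
FOO=bar
echo "$FOO"      # prints bar
EOF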
Is it possible to have a Perl script run shell aliases? I am running into a situation where we've got a Perl module I don't have access to modify and one of the things it does is logs into multiple servers via SSH to run some commands remotely. Sadly some of the systems (which I also don't have access to modify) have a buggy SSH server that will disconnect as soon as my system tries to send an SSH public key. I have the SSH agent running because I need it to connect to some other servers.
My initial solution was to set up an alias to set ssh to ssh -o PubkeyAuthentication=no, but Perl runs the ssh binary it finds in the PATH instead of trying to use the alias.
It looks like the only solutions are to disable the SSH agent while I am connecting to the problem servers, or to override the Perl module that does the actual connection.
Perhaps you could put a command called ssh in your PATH ahead of the real ssh, one that runs ssh with the options you want.
Alter the PATH before you run the Perl script, or use this in your .ssh/config:
Host *
PubkeyAuthentication no
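If turning off public-key authentication for every host is too broad, the same setting can be scoped to just the problem servers (host names hypothetical):

Host buggy1.example.com buggy2.example.com
    PubkeyAuthentication no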
Why don't you skip the alias and just create a shell script called ssh in a directory somewhere, then change the path to put that directory before the one containing the real ssh?
I had to do this recently with iostat because the new version output a different format that a third-party product couldn't handle (it scanned the output to generate a report).
I just created an iostat shell script which called the real iostat (with hardcoded path, but you could be more sophisticated), passing the output through an awk script to massage it into the original format. Then, I changed the path for the third-party program and it started working fine.
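A minimal sketch of such an ssh wrapper, assuming the real binary is /usr/bin/ssh and the wrapper's directory comes first in PATH:

#!/bin/sh
# wrapper named "ssh": force password authentication, pass everything else through
exec /usr/bin/ssh -o PubkeyAuthentication=no "$@"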
You could declare a function in .bashrc (or .profile or whatever) with that name. It could look like this (might break):
function ssh {
/usr/bin/ssh -o PubkeyAuthentication=no "$@"
}
But using a config file might be the best solution in your case.