I'm running a JMeter test plan from command line and it's currently outputting something along the lines of:
Created the tree successfully using C:\*****\TestPlan.jmx
Starting the test @ Thu Oct 11 10:20:43 EDT 2012 (1349965243947)
Waiting for possible shutdown message on port 4445
Tidying up ... @ Thu Oct 11 10:20:46 EDT 2012 (1349965246384)
... end of run
Is there any way to turn off this output and have the plan execute 'silently'?
I found a way to do this by following this article http://www.robvanderwoude.com/battech_redirection.php and appending > NUL to the command:
jmeter -n -t C:\***\TestPlan.jmx -Jhostname=%1 > NUL
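If JMeter also writes anything to standard error, that can be silenced as well; a minimal sketch building on the command above (plain Windows cmd redirection, nothing JMeter-specific):
jmeter -n -t C:\***\TestPlan.jmx -Jhostname=%1 > NUL 2>&1
The 2>&1 part sends stderr to the same place as stdout (NUL here); drop it if you still want to see errors.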
I'm trying to run a peak calling tool within a conda environment using snakemake.
The script looks like this (I only added the rules connected to the problem):
rule all:
    input:
        expand('{project}/{organism}/{mapper}/seacr/{pattern}.auc.threshold.bed', pattern = PATTERN, sample = IDS, organism = config['org'], project = config['project'], mapper = config['mapper'])

# SEACR - run the peak calling
rule seacr_run:
    input:
        IP = '{project}/{organism}/{mapper}/seacr/IP_{PATTERN}.bedgraph',
        IgG = '{project}/{organism}/{mapper}/seacr/IgG_{PATTERN}.bedgraph',
    output:
        bed1 = '{project}/{organism}/{mapper}/seacr/{PATTERN}.auc.threshold.bed',
    shell:
        '''
        bash /fs/home/yeroslaviz/SEACR/SEACR_1.3.sh {input.IP} 0.01 non stringent {output.bed1}
        '''
When running a dry run (-nps) of the snakemake command, I get the correct command printed to STDOUT:
> snakemake -nps /fs/pool/pool-bcfngs/scripts/P193.ChipSeq.Snakemake -j 100
...
Building DAG of jobs...
Job counts:
count jobs
1 all
1 seacr_run
2
[Tue Mar 3 13:56:19 2020]
rule seacr_run:
input: P193/Mmu.GrCm38/bowtie2/seacr/IP_H3K4m3.bedgraph, P193/Mmu.GrCm38/bowtie2/seacr/IgG_H3K4m3.bedgraph
output: P193/Mmu.GrCm38/bowtie2/seacr/H3K4m3.auc.threshold.bed
jobid: 22
wildcards: project=P193, organism=Mmu.GrCm38, mapper=bowtie2, PATTERN=H3K4m3
bash /fs/home/yeroslaviz/SEACR/SEACR_1.3.sh P193/Mmu.GrCm38/bowtie2/seacr/IP_H3K4m3.bedgraph 0.01 non stringent P193/Mmu.GrCm38/bowtie2/seacr/H3K4m3.auc.threshold.bed
[Tue Mar 3 13:56:19 2020]
localrule all:
...
Job counts:
count jobs
1 all
1 seacr_run
2
This was a dry-run (flag -n). The order of jobs does not reflect the order of execution.
When running the command above on the command line, the tool works without problems. But when I try to run it within the snakemake workflow, I get the following error:
Waiting at most 5 seconds for missing files.
MissingOutputException in line 67 of /fs/pool/pool-bcfngs/scripts/P193.ChipSeq.Snakemake:
Missing files after 5 seconds:
P193/Mmu.GrCm38/bowtie2/seacr/H3K4m3.auc.threshold.bed
This might be due to filesystem latency. If that is the case, consider to increase the wait time with --latency-wait.
Shutting down, this might take some time.
Exiting because a job execution failed. Look above for error message
Can anyone explain what is happening?
Thanks
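If the cause really were filesystem latency (for example an NFS-mounted pool), the wait time the error message refers to can be raised; a minimal sketch reusing the invocation from the dry run above:
snakemake -ps /fs/pool/pool-bcfngs/scripts/P193.ChipSeq.Snakemake -j 100 --latency-wait 60
This only helps if the file eventually shows up, though; if the declared output path never gets written at all, a longer wait won't change anything.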
I have a bash script which my service runs via ExecStart. When I run the script directly as the user 'staytus', it starts and stops things as expected, but for some reason I don't yet understand, when I run it via systemctl it throws errors!
Since it works fine when run manually as the same user the service is set to use, that tells me the problem is probably with the startup file.
[Unit]
Description=Starts up procodile which runs staytus
[Service]
User=staytus
Type=simple
ExecStart=/usr/bin/startup/start.sh
Restart=on-abort
[Install]
WantedBy=multi-user.target
I've tried adding a working directory, changing the user, etc., all with no luck. Any other suggestions of what to try? These are the errors:
Oct 12 15:36:52 system-name start.sh: /usr/local/bin/procodile: line 10: require: command not found
Oct 12 15:36:52 system-name start.sh: /usr/local/bin/procodile: line 12: version: command not found
Oct 12 15:36:52 system-name start.sh: /usr/local/bin/procodile: line 16: syntax error near unexpected token `('
Oct 12 15:36:52 system-name start.sh: /usr/local/bin/procodile: line 16: ` str = str.dup.force_encoding("BINARY") if str.respond_to? :force_encoding'
Oct 12 15:36:52 system-name systemd: status.service: main process exited, code=exited, status=2/INVALIDARGUMENT
Under systemd, the environment variables for a process run by ExecStart are not the same as in the user's terminal.
See https://www.freedesktop.org/software/systemd/man/systemd.exec.html#Environment%20variables%20in%20spawned%20processes
You'd have to check the environment variables where it runs OK (for example using the set command in the terminal) and add the needed ones to the systemd service definition using Environment="VAR1=VALUE1" "VAR2=VALUE2". See https://www.freedesktop.org/software/systemd/man/systemd.exec.html#Environment
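As a minimal sketch of what that could look like for the unit above (the variable names and values here, PATH and GEM_HOME, are only placeholders; the real ones have to be copied from the shell where start.sh works):
[Service]
User=staytus
Type=simple
# Placeholder values -- copy the real ones from the environment where start.sh runs fine
Environment="PATH=/usr/local/bin:/usr/bin:/bin"
Environment="GEM_HOME=/home/staytus/.gem"
ExecStart=/usr/bin/startup/start.sh
Restart=on-abort
After editing the unit, run systemctl daemon-reload and then restart the service.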
I just encountered some strange behavior with Perl 5.16.3 on FreeBSD 9.3-RELEASE-p3. We've got a cron job which runs every five minutes and generates some text status files. I just happened to list the contents of the output directory and saw that the timestamps for some of the files were in the future! The files are created like this:
if (open(OUT, "> $status_file_path")) {
print OUT "$status_info\n";
close OUT;
}
Now, the file handle OUT is used in several places; however, it is opened and closed within the same block, as shown above. And like I said, out of ten files, only a few had future dates when displayed using ls.
For example, files with the current date had timestamps like 04/02/2015 20:29:46, files with future timestamps were out in November, e.g. 11/10/2015 09:38:41.
What might be going on here?
EDIT
I've got two tests running:
1) a Perl script running a loop of 1000 iterations, sleeping a random time of up to 10 seconds between iterations, using the open/print/close logic above to create an output file, and aborting if the file's modification time is in the future.
2) a cron entry to touch a test file every minute, e.g. touch /home/test/test_file_date_with_cron.txt
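For reference, a crontab entry doing what test 2 describes would look something like this (runs every minute):
* * * * * touch /home/test/test_file_date_with_cron.txt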
TEST RESULTS
Neither of the tests generated output files with a timestamp in the future.
This is scary.
EDIT 2
Here is the filesystem info; the files are written in the /usr directory.
# df -h
Filesystem Size Used Avail Capacity Mounted on
/dev/gpt/gprootfs 2G 133M 1.7G 7% /
devfs 1.0k 1.0k 0B 100% /dev
/dev/gpt/gpusrfs 431G 3.8G 392G 1% /usr
procfs 4.0k 4.0k 0B 100% /proc
EDIT 3
Running the script outside of cron for several hundred iterations didn't duplicate the problem. HOWEVER, I just found some other files, created by a CGI script, which have the future dates:
-rw-r--r-- 1 test test 5783 Nov 10 2015 Config.xml_20150210_104151
-rw-r--r-- 1 test test 34548 Nov 10 2015 Config2.xml_20150210_104151
-rw-r--r-- 1 test test 6105 Nov 10 2015 Config.xml_20151109_232210
-rw-r--r-- 1 test test 34554 Nov 10 2015 Config2.xml_20151109_232210
-rw-rw-r-- 1 root test 2075 Nov 9 2015 Config.xml_20151109_231055
-rw-rw-r-- 1 root test 1232 Nov 9 2015 Config2.xml_20151109_231055
These are archive files, which get moved and renamed with the file's mtime timestamp. Note that BOTH ls and Perl's stat() function report the future date -- stat() is used to generate the file's timestamp portion of the name.
Looking at the first entry, ls reports "Nov 10 2015", whereas when the CGI script processed it, Perl's stat() reported "20150210_104151", i.e. "Feb 10 2015", which is most likely correct.
Further down, we see ls showing "Nov 10 2015" and stat() reported "20151109_232210", i.e. "Nov 09 2015".
Finding those additional archived config files helped me track down the cause, which was, as others have suggested, that the system date and timezone had changed.
From: 1447147328 and America/Adak
To: 1426637771 and America/New_York
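For anyone double-checking those epoch values, BSD date can convert them directly; a quick sketch using the timezones above:
TZ=America/Adak date -r 1447147328        # lands in November 2015
TZ=America/New_York date -r 1426637771    # lands in mid-March 2015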
What was throwing me off was that I thought the cron script wrote ALL of the output files each time it executed, but that's not the case. The files have different "refresh intervals".
I followed some online guides trying to get some headless VMs to start/suspend automatically at boot/shutdown on my Mac. I can't get it to work at all. This is my first time trying to get scripts to run on startup/shutdown, so it could be that I'm just missing something very basic; if that's the case, I apologize.
These are the steps I followed:
Created a directory /Library/StartupItems/HeadlessVM
Created two files within that directory:
-rwxr--r-- 1 root wheel 242 Feb 19 19:05 HeadlessVM
-rw-r--r-- 1 root wheel 188 Feb 20 12:42 StartupParameters.plist
Contents for HeadlessVM
$ cat HeadlessVM
#!/bin/sh
. /etc/rc.common

StartService ()
{
    ConsoleMessage "Starting HeadlessVM"
    /usr/local/bin/runvmheadless
}

StopService ()
{
    ConsoleMessage "Suspending HeadlessVM"
    /usr/local/bin/suspendvmheadless
}

RunService "$1"
Contents for StartupParameters.plist
$ cat StartupParameters.plist
{
    Description = "Runs/Suspends Virtual Machine Headless on OS X Startup/Shutdown";
    Provides = ("HeadlessVM");
    Uses = ("Disks");
    OrderPreference = ("Late");
}
Created my script files that will perform both actions:
-rwxr-xr-x# 1 xxxxxxx admin 164 Feb 19 01:06 runvmheadless
-rwxr-xr-x# 1 xxxxxxx admin 160 Feb 19 01:19 suspendvmheadless
Contents for runvmheadless
$ cat runvmheadless
#!/bin/bash
"/Applications/VMware Fusion.app/Contents/Library/vmrun" -T fusion start "/Volumes/Archive/Virtual Machines/vm.vmwarevm/vm.vmx" nogui
Contents for suspendvmheadless
$ cat suspendvmheadless
#!/bin/bash
"/Applications/VMware Fusion.app/Contents/Library/vmrun" -T fusion suspend "/Volumes/StaticData/Virtual Machines/vm.vmwarevm/vm.vmx"
My troubleshooting so far:
If I run the scripts from the terminal, they work as intended.
If I run sudo /sbin/SystemStarter start "HeadlessVM" (or stop), it also works.
In Console I only see the following when I reboot; no clue what's wrong, though.
2/20/12 12:11:09.128 PM SystemStarter: Runs/Suspends Virtual Machine Headless on OS X Startup/Shutdown (100) did not complete successfully
Appreciate any help, Thank you.
I found what was wrong. The code above is fine; the problem is that my scripts were trying to get data from an encrypted secondary disk which was not available at boot time.
I used this to get around the problem: https://github.com/jridgewell/Unlock
Thanks
For example, if I have two files:
file1:
This is file 1
and file2:
This is file 2
and create patch with the following command:
diff -u file1 file2 > files.patch
result is:
--- file1 Fri Aug 13 17:53:28 2010
+++ file2 Fri Aug 13 17:53:38 2010
@@ -1,1 +1,1 @@
-This is file 1
+This is file 2
Then if I try to apply this patch on Solaris with patch command:
patch -u -i files.patch
it hangs on:
Looks like a unified context diff.
File to patch:
1. Is there a way to use Solaris native patch command with unified diffs?
2. Which diff format is considered most portable if it's not possible to apply unified format?
Update:
I've found an answer to the first part of my question. It seems that patch on Solaris hangs if the second file (file2 in this case) exists in the same folder as the first one (file1). For example, the following quite common diff:
--- a/src/file.src Sat Aug 14 23:07:29 2010
+++ b/src/file.src Sat Aug 14 23:07:37 2010
@@ -1,2 +1,1 @@
-1
-
+2
will not work with the quite common patch command:
patch -p1 -u -d a < file.patch
while the following diff (note that the second file is renamed):
--- a/src/file.src Sat Aug 14 23:07:29 2010
+++ b/src/file_new.src Sat Aug 14 23:07:37 2010
@@ -1,2 +1,1 @@
-1
-
+2
will work perfectly.
For the second part of my question see accepted answer below.
On Solaris /usr/bin/patch is an old version required to comply with some ancient standards.
A modern version of GNU patch is provided as /usr/bin/gpatch on Solaris 8 and later.
diff -cr old.txt new.txt > patch.txt
gpatch -p0 < patch.txt
Works perfectly for me (using gpatch)
Single Unix v2 and v3 both support context diffs but not unified diffs, so for better portability you should use context diffs (-c option to diff and patch).
On older Solaris releases (pre-10, I think), you need to make sure that /usr/xpg4/bin is before /usr/bin in your $PATH, otherwise you may get compatibility versions of some utilities instead of standard ones.
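A minimal sketch of that portable workflow, using the example file names from the question:
PATH=/usr/xpg4/bin:$PATH; export PATH    # on older Solaris, put the XPG4 tools first (see above)
diff -c file1 file2 > files.patch        # context format instead of unified
patch -c -i files.patch file1            # -c tells patch to expect a context diff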