Check out this little watch.sh script:
coffee -j main.js -cw javascripts/*.coffee
coffee -j controllers.js -cw javascripts/controllers/*.coffee
That will only actually watch the first folder (javascripts/*.coffee): the first coffee -cw command blocks while it watches, so the second line never runs.
How can I watch both, and compile the results into two different .js files (main.js and controllers.js)?
Desired result:
All .coffee files in javascripts/ should be compiled into main.js
All .coffee files in javascripts/controllers should be compiled into controllers.js
Edit:
I've solved this by creating a simple executable that does this:
coffee -j wingme.js -cw javascripts/*.coffee &
coffee -j controllers.js -cw javascripts/controllers/*.coffee &
It will watch both folders in the background. Please let me know if you've got a better approach!
You could create a simple executable that watches both directories.
coffee -j wingme.js -cw javascripts/*.coffee &
coffee -j controllers.js -cw javascripts/controllers/*.coffee &
It will watch both folders in the background.
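One small refinement to that script: add a `wait` at the end so the script stays in the foreground and both watchers can be stopped together with Ctrl-C. A runnable sketch of the pattern, where `sleep` plus a `touch` marker stands in for each blocking `coffee -cw` invocation (hypothetical stand-ins, since coffee may not be installed):

```shell
# Stand-ins for the two blocking `coffee -cw` watchers:
(sleep 0.1; touch watcher1.done) &
(sleep 0.1; touch watcher2.done) &
# Without `wait`, the script exits immediately and leaves the watchers
# orphaned; with it, the script and its background jobs share a process
# group, so Ctrl-C stops both watchers at once.
wait
```
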
Imagine I have an LSF file as follows:
#!/bin/sh
#BSUB -J X
#BSUB -o X.out
#BSUB -e X.err
...
Once it is run, the output files appear in the current folder.
Now imagine I am in
~/code
I need the files to appear in
../cluster/
basically, go one folder up and from there into the cluster folder.
How should I do this within the LSF file?
You can put any relative or absolute path in #BSUB -[eo] <file>, e.g. #BSUB -e ../cluster/X.err. If you use a relative path, it's relative to the job's CWD. By default the job's CWD is the job submission directory, but it can be changed by a number of different parameters. bjobs -l <jobid> shows the actual CWD.
What happens is that while the job is running, stdout and stderr go to files under LSF_TMPDIR (default is $HOME/.lsbatch). After the job finishes, the contents of those files are copied to the pathnames specified in -[eo]. The copying is done on the execution host.
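Applied to the question above, the job script would simply use the relative paths directly in the directives (a sketch; ../cluster is resolved relative to the submission directory, here ~/code):

```shell
#!/bin/sh
#BSUB -J X
#BSUB -o ../cluster/X.out
#BSUB -e ../cluster/X.err
# ... job commands ...
```
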
I'm looking for a way to log information to a file about a submitted job immediately after it starts.
Normally all the job status is appended to the log file after a job has completed, but I'd like to know the information it has when it starts.
I know there's the -B flag, but I want the information in a file. I could also do something like:
bsub -J jobby -o run_job.log bjobs -l -J jobby > jobby.log; run_job
but maybe someone knows of a funkier way of doing this.
There are some subtle variations that essentially accomplish the same thing:
You can use a pre-exec to do a similar thing instead of doing the
bjobs as part of the command:
bsub -J jobby -E "bjobs -l -J jobby > jobby.log" run_job
You can use the job's environment to get your own jobid instead of
using -J if you write your submission as a script:
#!/bin/sh
#BSUB -o run_job.log
bjobs -l $LSB_JOBID > $LSB_JOBID.log
run_job
Then submit your job like this:
bsub < jobscript.sh
You can do some combination of the above: use $LSB_JOBID in a
pre-execution script.
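Concretely, that combination might look like this on the command line (a sketch: the single quotes matter so that $LSB_JOBID is expanded by the pre-exec shell on the execution host, where it is set, rather than at submission time):

```shell
bsub -o run_job.log -E 'bjobs -l $LSB_JOBID > $LSB_JOBID.log' run_job
```
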
That's about as 'funky' as it gets AFAIK :)
I need to compile a MATLAB m-file, file.m.
I also want to add some helper files and shared resources which are in folders
c:\tt\folder1\
c:\tt\folder2\
I can easily do this using the deploytool option in MATLAB. But I want to be able to do this from the MATLAB command line. After some searching, I found the following code:
mcc -m file.m -I C:\tt\folder1 -I C:\tt\folder2
but this is not doing anything; MATLAB just goes into 'busy mode'.
Can anyone tell me what I am doing wrong?
Use -a (which adds files or folders to the deployable archive) instead of -I (which only adds folders to the compile-time search path):
mcc -m file.m -a C:\tt\folder1 -a C:\tt\folder2
Wondering what the -j option means in the zip command. I found the following explanation:
-j
Store just the name of a saved file (junk the path), and do not store directory names. By default, zip will store the full path (relative to the current path).
But I'm not quite sure what it means exactly. Can anyone explain it using the following command as an example?
C:\programs\zip -j myzipfile file1 file2 file3
Thank you.
This will make more sense with a different example:
C:\programs\zip myzipfile a/file1 b/file2 c/file3
Normally this would result in a zip containing three "subdirs":
a/
+ file1
b/
+ file2
c/
+ file3
With -j, you get:
./
+ file1
+ file2
+ file3
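Conceptually, -j stores only the basename of each path, so every entry collapses into the archive root. The mapping is the same as running basename on each name (a stand-in illustration of what zip -j does to each stored name, not zip itself):

```shell
# Print the entry name zip -j would store for each input path:
for f in a/file1 b/file2 c/file3; do
  basename "$f"
done
```

Note that as a consequence, two files with the same basename in different directories (say a/file1 and b/file1) would collide in the archive.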
In that case it won't do anything special.
But if, for example, you type
C:\programs\zip -j myzipfile directory1
and directory1 contains subdirectories, then all the files you zip will, when extracted, be put in the same directory, regardless of what subdirectory they were in originally.
With the Linux zip command, if you use the -j option together with the -i option, the -j may need to come after the -i. Below, the -r means recurse into 'directory1':
C:\programs\zip -r myzipfile.zip directory1 -i subDirectoryA/*.txt -j
If the -j is earlier in the command, the resulting zip file may be empty.
-j is "Junk pathnames"
I'm trying to write a Makefile which should download some sources if and only if they are missing.
Something like:
hello: hello.c
gcc -o hello hello.c
hello.c:
wget -O hello.c http://example.org/hello.c
But of course this causes hello.c to be downloaded every time make is run. I would like this Makefile to download hello.c only if it is missing. Is this possible with GNU make, and if so, how?
My guess is that wget doesn't update the timestamp on hello.c, but retains the remote timestamp. This causes make to believe that hello.c is old and to attempt to download it again. Try
hello.c:
wget ...
touch $@
EDIT: The -N option to wget will prevent wget from downloading anything unless the remote file is newer (but it'll still check the timestamp of the remote file, of course.)
Since the Makefile should be working as you want, you need to check a few unlikely cases:
1) Check that you don't have any .PHONY rules mentioning the source file.
2) Check that the source target name matches the file path you are downloading.
You could also try running make -d to see why make thinks it needs to 're-build' the source file.
The Makefile you wrote downloads hello.c only if it's missing. Perhaps you are doing something else wrong? See for example:
hello: hello.c
gcc -o hello hello.c
hello.c:
echo 'int main() {}' > hello.c
And:
% make
echo 'int main() {}' > hello.c
gcc -o hello hello.c
% rm hello
% make
gcc -o hello hello.c
% rm hello*
% make
echo 'int main() {}' > hello.c
gcc -o hello hello.c
(the echo command was not executed the second time)
If the prerequisites for hello.c have changed or are empty and Make still downloads the file when it exists, one way to prevent Make from re-downloading it is to test for the file in the body of the target:
hello.c:
test -f $@ || wget -O hello.c http://example.org/hello.c
The test command returns true if the hello.c file exists; otherwise it returns false and the wget command runs.
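The idiom is easy to try in isolation by substituting a harmless command for wget (here echo stands in for the download):

```shell
rm -f hello.c
test -f hello.c || echo 'downloaded' > hello.c   # file missing: command runs
test -f hello.c || echo 'AGAIN' > hello.c        # file exists: command skipped
cat hello.c   # prints "downloaded"
```
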