I want to use Watchman to rebuild my directory when a change happens.
I am using the watchman-make command to run this build command:
$ babel ./src/ -d ./lib/
Currently I am using:
$ watchman-make -p "./src/**" -r 'babel ./src/ -d ./lib/'
to watch any file change inside src and run the build command.
Watchman outputs:
# Relative to /home/marc/workspace/abnb
# Changes to files matching ./src/** will execute `babel ./src/ -d ./lib/`
# waiting for changes
But nothing seems to happen when I change files within my directory src/.
You probably want to rewrite your pattern as src/**/*
Watchman name resolution doesn't (and won't) resolve . or .. in path names. The ** operator matches any directory depth, so you should also specify * to match any file in any directory under the src dir.
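Applied to the command from the question, that would look something like this (dropping the leading ./ from the pattern):
$ watchman-make -p 'src/**/*' -r 'babel ./src/ -d ./lib/'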
I am using gsutil to upload a folder that contains symlinks. The problem is that some of these are broken symlinks (unfortunately, that's the case).
Here is an example of the command I am using:
gsutil -m cp -c -n -e -L output-upload.log -r output gs://my-storage
and I get the following:
[Errno 2] No such file or directory: 'output/1231/file.mp4'
CommandException: 1 file/object could not be transferred.
Is there a way to make gsutil skip this file or fail safely without stopping the upload?
This was a bug in gsutil (which it looks like you reported here) and it will be fixed in gsutil 4.23.
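Until you are on a fixed release, you can check which version is installed and upgrade it; which upgrade command applies depends on whether gsutil is standalone or came with the Cloud SDK:
$ gsutil version
$ gsutil update              # standalone install
$ gcloud components update   # Cloud SDK install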
I run wget to create a warc archive as follows:
$ wget --warc-file=/tmp/epfl --recursive --level=1 http://www.epfl.ch/
$ l -h /tmp/epfl.warc.gz
-rw-r--r-- 1 david wheel 657K Sep 2 15:18 /tmp/epfl.warc.gz
$ find .
./www.epfl.ch/index.html
./www.epfl.ch/public/hp2013/css/homepage.70a623197f74.css
[...]
I only need the epfl.warc.gz file. How do I prevent wget from creating all the individual files?
I tried as follows:
$ wget --warc-file=/tmp/epfl --recursive --level=1 --output-document=/dev/null http://www.epfl.ch/
ERROR: -k or -r can be used together with -O only if outputting to a regular file.
tl;dr Add the options --delete-after and --no-directories.
Option --delete-after instructs wget to delete each downloaded file immediately after its download is complete. As a consequence, the maximum disk usage during execution will be the size of the WARC file plus the size of the single largest downloaded file.
Option --no-directories prevents wget from leaving behind a useless tree of empty directories. By default wget creates a directory tree that mirrors the one on the host, and downloads each file into the appropriate directory of the mirrored tree. wget does this even when the downloaded file is temporary due to --delete-after. To prevent that, use option --no-directories.
The following demonstrates the result, using your example (slightly altered).
$ cd $(mktemp -d)
$ wget --delete-after --no-directories \
--warc-file=epfl --recursive --level=1 http://www.epfl.ch/
...
Total wall clock time: 12s
Downloaded: 22 files, 1.4M in 5.9s (239 KB/s)
$ ls -lhA
-rw-rw-r--. 1 chadv chadv 1.5M Aug 31 07:55 epfl.warc
If you forget to use --no-directories, you can easily clean up the tree of empty directories with find -type d -delete.
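A slightly more defensive form of that cleanup, run from the directory where you started the crawl (-empty skips any directory that still has files in it):
$ find . -type d -empty -delete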
For individual files (without --recursive), the option -O /dev/null keeps wget from creating a file for the output. For recursive fetches, /dev/null is not accepted (I don't know why). But why not just write all the output, concatenated, into a single file via -O tmpfile and delete that file afterwards?
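A sketch of that approach, reusing the example from the question (the scratch file name /tmp/wget-output is arbitrary):
$ wget --warc-file=/tmp/epfl --recursive --level=1 \
      --output-document=/tmp/wget-output http://www.epfl.ch/
$ rm /tmp/wget-output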
wget -r -np www.a.com/b/c/d
The above will create a directory called 'www.a.com' in the current working directory on my local computer, containing all subdirectories on the path to 'd'.
I only want directory 'd' (and its contents) created in my cwd.
How can I achieve this?
You can specify the destination directory explicitly and avoid the creation of sub-directories with the following command.
wget -nd -P /home/d www.a.com/b/c/d
The -nd option avoids the creation of sub-directories and -P sets the destination directory to /home/d, so all your files will be downloaded to the /home/d folder only.
I have written a shell script to show me what files need patching without actually doing a cvs up. It's just a simple cvs stat parser.
But it doesn't tell me whether a new directory would come down with a new cvs up -d.
Is there a way to find out if I'm missing directories that would be created with the next cvs up -d?
The -n switch shows me missing directories:
cvs -n up -d
The -n switch makes cvs do a dry run: it does not actually perform the update; it just shows what cvs up -d would do.
I need files to be downloaded to /tmp/cron_test/. My wget code is
wget --random-wait -r -p -nd -e robots=off -A".pdf" -U mozilla http://math.stanford.edu/undergrad/
So is there some parameter to specify the directory?
From the manual page:
-P prefix
--directory-prefix=prefix
Set directory prefix to prefix. The directory prefix is the
directory where all other files and sub-directories will be
saved to, i.e. the top of the retrieval tree. The default
is . (the current directory).
So you need to add -P /tmp/cron_test/ (short form) or --directory-prefix=/tmp/cron_test/ (long form) to your command. Also note that if the directory does not exist it will get created.
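Applied to the command from the question, that would look something like:
$ wget --random-wait -r -p -nd -e robots=off -A".pdf" -U mozilla \
      -P /tmp/cron_test/ http://math.stanford.edu/undergrad/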
-O is the option to specify the path of the file you want to download to:
wget <uri> -O /path/to/file.ext
-P sets the directory prefix; the file will be downloaded into that directory:
wget <uri> -P /path/to/folder
Make sure you have the correct URL for whatever you are downloading. First of all, URLs with characters like ? in them are not resolved into a clean file name; any characters that aren't resolved as part of the source URL become the name of the file you are downloading into.
For example:
wget "sourceforge.net/projects/ebosse/files/latest/download?source=typ_redirect"
will download into a file named ?source=typ_redirect.
As you can see, knowing a thing or two about URLs helps to understand wget.
I was booting from a Hiren's disk and only had Linux 2.6.1 as a resource (import os was unavailable). The correct syntax that solved my problem of downloading an ISO onto the physical hard drive was:
wget "(source url)" -O (directory where HD was mounted)/isofile.iso"
You can verify that you have the correct URL by checking that wget downloads into a file named index.html (the default file) that has the correct size and other attributes of the file you need, as shown by the following command:
wget "(source url)"
Once the URL and source file are correct and it is downloading into index.html, you can stop the download (Ctrl + Z) and change the output file by using:
-O "<specified download directory>/filename.extension"
after the source URL.
In my case this results in downloading an ISO and storing it as a binary file under isofile.iso, which hopefully mounts.
"-P" is the right option, please read on for more related information:
wget -nd -np -P /dest/dir --recursive http://url/dir1/dir2
Relevant snippets from man pages for convenience:
-P prefix
--directory-prefix=prefix
Set directory prefix to prefix. The directory prefix is the directory where all other files and subdirectories will be saved to, i.e. the top of the retrieval tree. The default is . (the current directory).
-nd
--no-directories
Do not create a hierarchy of directories when retrieving recursively. With this option turned on, all files will get saved to the current directory, without clobbering (if a name shows up more than once, the
filenames will get extensions .n).
-np
--no-parent
Do not ever ascend to the parent directory when retrieving recursively. This is a useful option, since it guarantees that only the files below a certain hierarchy will be downloaded.
man wget:
-O file
--output-document=file
wget "url" -O /tmp/cron_test/<file>