Why is a file downloaded with wget jumbled up?

Why is the file saved with the command:
wget http://cdn.jquerytools.org/1.2.7/full/jquery.tools.min.js
all jumbled up? It appears fine in the browser.

It's gzipped. Try this:
$ mv jquery.tools.min.js jquery.tools.min.js.gz
$ gunzip jquery.tools.min.js.gz
$ cat jquery.tools.min.js
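If you want to confirm that compression is the culprit before renaming anything, the standard file utility (where available) should report something like "gzip compressed data" for the downloaded file:
$ file jquery.tools.min.js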

I'm not sure how long this has existed, but as of wget version 1.14 you can do:
wget --no-warc-compression http://cdn.jquerytools.org/1.2.7/full/jquery.tools.min.js
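You can also check what the server is actually sending by printing the response headers with -S; a Content-Encoding: gzip header would confirm the compressed body (a quick check, assuming the CDN still serves this file):
$ wget -S --spider http://cdn.jquerytools.org/1.2.7/full/jquery.tools.min.js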

Installing MongoDB 2022

I downloaded MongoDB and tried to use brew, but it didn't work.
I then tried a bunch of commands, such as:
$ curl -O https://fastdl.mongodb.org/osx/mongodb-osx-x86_64-3.4.6.tgz
$ tar -zxvf mongodb-osx-x86_64-3.4.6.tgz
$ mkdir -p mongodb
$ cp -R -n mongodb-osx-x86_64-3.4.6/ mongodb
$ sudo mv mongodb /usr/local
That didn't work either.
At step 5 (the sudo mv), it says the directory is not empty or does not exist. I tried emptying the directory, which didn't work, and I tried creating a different one, which didn't work either.
I can't find any solution. Can someone help me, please?
I think your /usr/local folder already contains a non-empty folder named mongodb.
Refer to this for details.
You can confirm this by listing the files in it:
ls /usr/local/mongodb
If it doesn't contain any important files, you can try removing that directory as a superuser and then continue with the installation.
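A minimal sketch of that suggestion, assuming the existing /usr/local/mongodb really contains nothing you need and that the extracted mongodb folder from step 4 is still in your current directory:
$ ls /usr/local/mongodb              # see what is already there
$ sudo rm -rf /usr/local/mongodb     # only if none of it matters
$ sudo mv mongodb /usr/local         # retry step 5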

How do I wget a page from archive.org without the directory?

I'm trying to download a webpage from archive.org (i.e. http://wayback.archive.org/web/20110410223952id_/http://www.goldalert.com/gold-price-hovers-at-1460-as-ecb-hikes-rates-2/ ) with wget. I want to save it as 00001/index.html. How would I go about doing this?
I tried wget -p -k http://wayback.archive.org/web/20110410223952id_/http://www.goldalert.com/gold-price-hovers-at-1460-as-ecb-hikes-rates-2/ -O 00001/index.html but that didn't work. I then cd'd into the directory and dropped the 00001 from the -O flag; that didn't work either. I then removed the -O flag entirely. That worked, but I got the whole archive.org directory tree (i.e. a wayback.archive.org directory, then a web directory, and so on) and the filename wasn't changed :(
What do I do?
Sorry for the obviously noob question.
wget http://wayback.archive.org/web/20110410223952id_/http://www.goldalert.com/gold-price-hovers-at-1460-as-ecb-hikes-rates-2/ -O 00001/index.html
Solved my own question. So simple.
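For anyone hitting the same thing: -O does not create missing directories, so if 00001/ doesn't exist yet, wget can't open 00001/index.html. My guess is that creating the directory first is what made the command start working:
$ mkdir -p 00001
$ wget http://wayback.archive.org/web/20110410223952id_/http://www.goldalert.com/gold-price-hovers-at-1460-as-ecb-hikes-rates-2/ -O 00001/index.html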

wget corrupted file .zip file error

I am using wget to try to download two .zip files (SWVF_1_44.zip and SWVF_44_88.zip) from this site: http://www2.sos.state.oh.us/pls/voter/f?p=111:1:0::NO:RP:P1_TYPE:STATE
When I run:
wget -r -l1 -H -t1 -nd -N -np -A.zip -erobots=off "http://www2.sos.state.oh.us/pls/voter/f?p=111:1:0::NO:RP:P1_TYPE:STATE/SWVF_1_44.zip"
I get a downloaded zip file with a mangled name (f#p=111%3A1%3A0%3A%3ANO%3ARP%3AP1_TYPE%3ASTATE%2FSWVF_1_44) that cannot be opened.
Any thoughts on where my code is wrong?
There's nothing "wrong" with your code. Wget is simply assuming you want to save the file under the same name that appears in the URL. Use the -O option to specify an output filename:
wget blahblahblah -O useablefilename.zip
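For example, applied to the first file from the question (URL kept exactly as written there; whether that URL really returns the zip directly is a separate issue):
wget -erobots=off "http://www2.sos.state.oh.us/pls/voter/f?p=111:1:0::NO:RP:P1_TYPE:STATE/SWVF_1_44.zip" -O SWVF_1_44.zip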

WGET problem downloading pdfs from website

I'm trying to download all of the pdfs and ppts from this website: http://mlss2011.comp.nus.edu.sg/index.php?n=Site.Slides
In Cygwin I run:
wget --no-parent -A.pdf,.pptx -r -l1 http://mlss2011.comp.nus.edu.sg/index.php?n=Site.Slides
but no files are downloaded.
What do I need to change in the above wget command for this to work?
I needed to use -e robots=off, so this worked:
wget -e robots=off -A.pdf,.pptx -r -l1 http://mlss2011.comp.nus.edu.sg/index.php?n=Site.Slides
Also, in general, use the --debug flag to get more information about what wget is doing.
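One extra habit worth adopting (my suggestion, not something the answer above mentions): quote the URL so the shell never gets a chance to interpret the ? and = characters, e.g.:
wget -e robots=off --no-parent -A.pdf,.pptx -r -l1 "http://mlss2011.comp.nus.edu.sg/index.php?n=Site.Slides"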

How do I move/copy a symlink to a different folder as a symlink under Solaris?

It is an odd behaviour I have seen only on Solaris: when I try to copy a symbolic link to some other folder under a different name with the "cp -R -P" command, it copies the entire directory/file the link points to.
For example:
link -> dir
cp -R -P link folder/new_link
I believe the "-d" argument is what you need.
As per the cp man page:
-d same as --no-dereference --preserve=links
Example:
cp -d -R -P link folder/new_link
I was using "cp -d" and that worked for me.
The cp man page seems to say that you want to use '-H' to preserve symlinks within the source directory.
You might consider copying via tar, for example: tar -cf - srcdir | (cd somedir; tar -xf -)
Try using cpio (with the -p (pass) option) or the old tar in a pipe trick.
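A sketch of the cpio pass-through approach (assuming the default behaviour of not following symlinks, i.e. no -L flag): it recreates the link as a link in the target directory, although under its original name, so rename it afterwards if you need new_link:
echo link | cpio -pdm folder
Here -p is pass-through mode, -d creates leading directories, and -m preserves modification times.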