gzip -d -f *.gz
When I run this, I can't see my XML files; I just see files without any extension.
What is the reason?
gzip only strips the .gz suffix, so if the archives were named file_name.gz (without .xml), the decompressed files end up with no extension. You can add it back afterwards:
gzip -d -f *.gz
for f in file_name*; do mv "$f" "$f.xml"; done
(A rename utility, where available, can do the renaming step too.)
You can use this :)
I am aware that you can download from multiple URLs using:
wget "url1" "url2" "url3"
Renaming the output file can be done via:
wget "url1" -O "new_name1"
But when I tried
wget "url1" "url2" "url3" -O "name1" "name2" "name3"
all the files are using name1.
What is the proper way to do this in a single command?
Yes, something like this: you can add a file name next to each URL in the file and then do:
while read -r url fileName; do
    wget -O "$fileName" "$url"
done < list
where it is assumed you have added a (unique) file name after each URL in the file (separated by a space).
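To illustrate the expected format of list (the URLs and names here are made-up placeholders, not from the question), each line holds a URL and a name separated by a space, and read splits them into the two variables:

```shell
# Build a sample "list" file (hypothetical URLs):
printf '%s\n' \
  'https://example.com/a.tar.gz first.tar.gz' \
  'https://example.com/b.tar.gz second.tar.gz' > list

# Same loop as above, with echo standing in for wget:
while read -r url fileName; do
    echo "would save $url as $fileName"
done < list
```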
The -O option allows you to specify the destination file name. But if you're downloading multiple files at once, wget will save all of their content to the file you specify via -O. Note that in either case, the file will be truncated if it already exists. See the man page for more info.
You can exploit this option by telling wget to download the links one-by-one:
while IFS= read -r url; do
    fileName="blah" # Add a rule to define a new name for each file here
    wget -O "$fileName" "$url"
done < list
Hope it's useful.
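One possible naming rule (my assumption, not part of the answer) is a counter plus the last path component of the URL:

```shell
# Derive a name from each URL: a running number plus the URL's basename.
n=1
while IFS= read -r url; do
    fileName="file_${n}_$(basename "$url")"
    wget -O "$fileName" "$url"
    n=$((n + 1))
done < list
```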
How do I undo the gzip command in CentOS?
sudo gzip -r plugins
If I try sudo gunzip -r plugins, it gives me an error: not in gzip format.
What I want to do is zip the directory.
tar -zcvf archive.tar.gz directory/
Check these answers: https://unix.stackexchange.com/a/93158 https://askubuntu.com/a/553197 & https://www.centos.org/docs/2/rhl-gsg-en-7.2/s1-zip-tar.html
sudo find . -name "*.gz" -exec gunzip {} \;
I think you have two questions:
1. How do I undo what I did?
2. How do I zip a directory?
Have you even looked at man gzip or gzip --help?
Answers
1. find plugins -type f -name "*.gz" -print0 | xargs -0 gunzip
2. tar -zcvf plugins.tar.gz plugins
2b. I suspect that your level of Linux experience is low, so you'd probably be more comfortable using zip. (Remember to do a zip --help or man zip before coming back for more advice.)
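A zip version of both steps might look like this (a sketch, assuming the zip and unzip packages are installed):

```shell
zip -r plugins.zip plugins   # recursively zip the whole directory
unzip plugins.zip            # and this extracts it again
```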
Explanation. gzip only zips up one file. If you want to do a bunch of files, you have to smush them up into one file first (using tar) and then compress that using gzip.
What you did was recursively gzip up each individual file in plugins/.
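The two steps described in the explanation can be spelled out separately (using the plugins directory from the question); tar -zcvf simply combines them into one command:

```shell
tar cf plugins.tar plugins   # step 1: bundle the directory into one file
gzip plugins.tar             # step 2: compress it, producing plugins.tar.gz
```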
I want to download an entire website using the wget -r command and change the names of the downloaded files.
I have tried with:
wget -r -o doc.txt "http....
hoping that it would automatically create files named in order, like doc1.txt, doc2.txt, but it actually saved the log output to that file.
Is there any way to do this with just one command?
Thanks!
-r tells wget to recursively get resources from a host.
-o file saves log messages to file instead of the standard error. I think that is not what you are looking for; I think you want -O file.
-O file stores the resource(s) in the given file, instead of creating a file in the current directory with the name of the resource. If used in conjunction with -r, it causes wget to store all resources concatenated to that file.
Since wget -r downloads and stores more than one file, recreating the server's file tree on the local system, it makes no sense to give the name of a single file to store everything in.
If what you want is to rename all downloaded files to match the pattern docX.txt, you can do it with a different command after wget has finished:
wget -r http....
i=1
while IFS= read -r file; do
    mv "$file" "$(dirname "$file")/doc$i.txt"
    i=$((i + 1))
done < <(find . -type f)
I have a year's worth of log files that are all .gz files. Is there a command I can use to extract them all at once into their current directory? I tried unzip *.gz but it doesn't work. Any other suggestions?
A shell script?
#!/bin/ksh
TEMPFILE=tempksh_$$.tmp               # create a file name
> "$TEMPFILE"                         # create a file w/ that name
ls *.gz \
  | awk '{printf "gunzip %s;\n", $0;}' \
  >> "$TEMPFILE"                      # write a gunzip cmd for each item
chmod 755 "$TEMPFILE"                 # give run permissions
./"$TEMPFILE"                         # and run it
rm -f "$TEMPFILE"                     # clean up
Untested, but I think you get the idea....
Actually, with a little fiddling it gets far simpler...
set -A ARR *.gz
for i in "${ARR[@]}"; do gunzip "$i"; done
unset ARR
For Google's sake, since it took me here, it's as simple as this:
gzip -dc access.log.*.gz > access.log
As noted in a comment, you want to use gunzip, not unzip. unzip is for .zip files; gzip is for .gz files. Two completely different formats.
gunzip *.gz
or:
gzip -d *.gz
That will delete the .gz files after successfully decompressing them. If you'd like to keep all of the original .gz files, then:
gzip -dk *.gz
I am wondering: I like 7z compression, but how do I compress data only? I don't want a file in the archive with file info, just raw data. How can I do this? It would also be nice if I could remove the headers too, but that isn't necessary.
From man 7z:
-si Read data from StdIn (eg: tar cf - directory | 7z a -si directory.tar.7z)
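Putting the man-page example together with -so (write to stdout) for the reverse direction gives a round trip; a sketch, with mydata as a placeholder directory name:

```shell
tar cf - mydata | 7z a -si mydata.tar.7z   # pipe a tar stream into 7z
7z x -so mydata.tar.7z | tar xf -          # stream it back out and untar
```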