How do I undo the gzip command in CentOS?
sudo gzip -r plugins
If I try sudo gunzip -r plugins, it gives me an error: not in gzip format.
What I actually want to do is zip the directory.
tar -zcvf archive.tar.gz directory/
Check these answers: https://unix.stackexchange.com/a/93158, https://askubuntu.com/a/553197 and https://www.centos.org/docs/2/rhl-gsg-en-7.2/s1-zip-tar.html
sudo find . -name "*.gz" -exec gunzip {} \;
I think you have two questions:
1. How do I undo what I did?
2. How do I zip a directory?
Have you even looked at man gzip or gzip --help?
Answers:
1. find plugins -type f -name "*.gz" | xargs gunzip
2. tar -zcvf plugins.tar.gz plugins
2b. I suspect that your level of Linux experience is low, so you'd probably be more comfortable using zip; see the sketch after the explanation below. (Remember to do a zip --help or man zip before coming back for more advice.)
Explanation: gzip only compresses individual files; even with -r it just compresses each file separately. If you want a bunch of files in one archive, you have to smush them into one file first (using tar) and then compress that with gzip.
What you did was recursively gzip up each individual file in plugins/.
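For the zip route from 2b, assuming the zip and unzip packages are installed, a minimal sketch would be:
# create plugins.zip containing the whole plugins/ directory
zip -r plugins.zip plugins
# and to unpack it again later
unzip plugins.zip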
Related
I want to download an entire website using the wget -r command and change the name of the file.
I have tried with:
wget -r -o doc.txt "http....
hoping that the OS would automatically create files in order, like doc1.txt, doc2.txt, but it actually saves the stream of stdout to that file.
Is there any way to do this with just one command?
Thanks!
-r tells wget to recursively get resources from a host.
-o file saves log messages to file instead of standard error. That is not what you are looking for; I think you want -O file.
-O file stores the resource(s) in the given file, instead of creating a file in the current directory named after the resource. If used in conjunction with -r, it causes wget to store all resources concatenated into that single file.
Since wget -r downloads and stores more than one file, recreating the server's file tree on the local system, it makes no sense to give it the name of a single file to store everything in.
If what you want is to rename all downloaded files to match the pattern docX.txt, you can do it with a separate command after wget has finished:
wget -r http....
i=1
# rename every downloaded file to docN.txt inside its own directory
while IFS= read -r file
do
    mv "$file" "$(dirname "$file")/doc$i.txt"
    i=$(( i + 1 ))
done < <(find . -type f)
I suppose I could compare the number of files in the source directory to the number of files in the target directory as cp progresses, or perhaps do it with folder size instead? I tried to find examples, but all bash progress bars seem to be written for copying single files. I want to copy a bunch of files (or a directory, if the former is not possible).
You can also use rsync instead of cp like this:
rsync -Pa source destination
Which will give you a progress bar and estimated time of completion. Very handy.
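If your rsync is new enough (3.1 or later, if I remember right), you can also ask for a single overall progress bar for the whole transfer instead of per-file output:
# one aggregate progress line for the entire transfer
rsync -a --info=progress2 source/ destination/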
To show a progress bar while doing a recursive copy of files & folders & subfolders (including links and file attributes), you can use gcp (easily installed in Ubuntu and Debian by running "sudo apt-get install gcp"):
gcp -rf SRC DEST
Here is the typical output while copying a large folder of files:
Copying 1.33 GiB 73% |##################### | 230.19 M/s ETA: 00:00:07
Notice that it shows just one progress bar for the whole operation, whereas if you want a single progress bar per file, you can use rsync:
rsync -ah --progress SRC DEST
You may have a look at the tool vcp. That's a simple copy tool with two progress bars: one for the current file, and one for the overall progress.
EDIT
Here is the link to the sources: http://members.iinet.net.au/~lynx/vcp/
Manpage can be found here: http://linux.die.net/man/1/vcp
Most distributions have a package for it.
Here is another solution: use the tool bar.
You could invoke it like this:
#!/bin/bash
# Usage: pass the source directory as $1 and the (existing) target directory as $2
filesize=$(du -sb "${1}" | awk '{ print $1 }')
tar -cf - -C "${1}" ./ | bar --size "${filesize}" | tar -xf - -C "${2}"
You have to go via tar, and it will be inaccurate for small files. Also, you must make sure that the target directory exists. But it is one way to do it.
My preferred option is Advanced Copy, as it uses the original cp source files.
$ wget http://ftp.gnu.org/gnu/coreutils/coreutils-8.21.tar.xz
$ tar xvJf coreutils-8.21.tar.xz
$ cd coreutils-8.21/
$ wget --no-check-certificate https://raw.githubusercontent.com/jarun/advcpmv/master/advcpmv-0.8-8.32.patch
$ patch -p1 -i advcpmv-0.8-8.32.patch
$ ./configure
$ make
The new programs are now located in src/cp and src/mv. You may choose to replace your existing commands:
$ sudo cp src/cp /usr/local/bin/cp
$ sudo cp src/mv /usr/local/bin/mv
Then you can use cp as usual, or specify -g to show the progress bar:
$ cp -g src dest
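For whole directories, combining the patch's -g flag with the usual recursive flag should work; the paths below are placeholders, and I'm assuming the patched mv accepts the same -g flag since the patch builds src/mv as well:
$ cp -gR src_dir/ dest_dir/
$ mv -g bigfile.iso /mnt/backup/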
A simple Unix way is to go to the destination directory and do watch -n 5 du -sh. Perhaps make it prettier by presenting it as a bar. This can help in environments where you have just the standard Unix utilities and no scope for installing additional tools. du -sh is the key part; watch just reruns it every 5 seconds.
Pros: works on any Unix system. Cons: no progress bar.
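If you want a rough percentage out of the same idea, here is a small sketch (GNU du assumed for -sb; the paths are placeholders):
#!/bin/bash
# Rough copy progress: compare destination size to source size every 5 seconds.
SRC=/path/to/source
DEST=/path/to/destination
total=$(du -sb "$SRC" | awk '{print $1}')        # total bytes to copy
while true; do
    copied=$(du -sb "$DEST" | awk '{print $1}')  # bytes copied so far
    echo "$(( copied * 100 / total ))% copied"
    sleep 5
done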
To add another option, you can use cpv. It uses pv to imitate the usage of cp.
It works like pv, but you can use it to recursively copy directories.
You can get it here
There's a tool pv to do this exact thing: http://www.ivarch.com/programs/pv.shtml
There's an Ubuntu package for it in apt.
How about something like
find . -type f | pv -s $(find . -type f | wc -c) | xargs -i cp {} --parents /DEST/$(dirname {})
It finds all the files in the current directory, pipes the list through pv while giving pv an estimated size so the progress meter works, and then pipes that to cp with the --parents flag so the DEST path matches the SRC path.
One problem I have yet to overcome is that if you issue this command
find /home/user/test -type f | pv -s $(find . -type f | wc -c) | xargs -i cp {} --parents /www/test/$(dirname {})
the destination path becomes /www/test/home/user/test/....FILES... and I am unsure how to tell the command to get rid of the '/home/user/test' part. That's why I have to run it from inside the SRC directory.
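One way to drop the /home/user/test prefix without permanently changing directories is to run the whole pipeline from a subshell that cds into the source first, so find emits relative paths (just a sketch using the paths from the example above):
(cd /home/user/test && find . -type f | pv -s "$(find . -type f | wc -c)" | xargs -i cp --parents {} /www/test)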
Check the source code for progress_bar in my git repository below:
https://github.com/Kiran-Bose/supreme
Also try the custom bash script package supreme to see how the progress bar works with the cp and mv commands.
Functionality overview
(1)Open Apps
----Firefox
----Calculator
----Settings
(2)Manage Files
----Search
----Navigate
----Quick access
|----Select File(s)
|----Inverse Selection
|----Make directory
|----Make file
|----Open
|----Copy
|----Move
|----Delete
|----Rename
|----Send to Device
|----Properties
(3)Manage Phone
----Move/Copy from phone
----Move/Copy to phone
----Sync folders
(4)Manage USB
----Move/Copy from USB
----Move/Copy to USB
There is the command progress (https://github.com/Xfennec/progress), a coreutils progress viewer.
Just run progress in another terminal to see the copy/move progress. For continuous monitoring, use the -M flag.
I have a year's worth of log files that are all .gz files. Is there a command I can use to extract them all at once into their current directory? I tried unzip *.gz but it doesn't work. Any other suggestions?
A shell script?
#!/bin/ksh
TEMPFILE=tempksh_$$.tmp            # pick a temp file name
> $TEMPFILE                        # create the (empty) file
# build a throwaway script with one gunzip command per .gz file
ls -l | grep '.*\.gz$' \
    | awk '{printf "gunzip %s;\n", $9;}' \
    >> $TEMPFILE
chmod 755 $TEMPFILE                # give it run permissions
./$TEMPFILE                        # and run it
rm -f $TEMPFILE                    # clean up
Untested, but I think you get the idea...
Actually, with a little fiddling it gets far simpler:
set -A ARR *.gz
for i in "${ARR[@]}"; do gunzip "$i"; done
unset ARR
For Google's sake, since it took me here, it's as simple as this:
gzip -dc access.log.*.gz > access.log
As noted in a comment, you want to use gunzip, not gzip. unzip is for .zip files. gzip is for .gz files. Two completely different formats.
gunzip *.gz
or:
gzip -d *.gz
That will delete the .gz files after successfully decompressing them. If you'd like to keep all of the original .gz files, then:
gzip -dk *.gz
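If your gzip is too old to have -k (it was added around gzip 1.6, if I remember correctly), you can keep the originals by decompressing to stdout instead:
for f in *.gz; do gunzip -c "$f" > "${f%.gz}"; done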
$ find /tmp/a1
/tmp/a1
/tmp/a1/b2
/tmp/a1/b1
/tmp/a1/b1/x1
Simply trying
find /tmp/a1 -exec tar -cvf dirall.tar {} \;
doesn't work.
Any help?
The command specified for -exec is run once for each file found. As such, you're recreating dirall.tar every time the command is run. Instead, you should pipe the output of find to tar.
find /tmp/a1 -print0 | tar --null -T- -cvf dirall.tar
Note that if you're simply using find to get a list of all the files under /tmp/a1 and not doing any sort of filtering, it's much simpler to use tar -cvf dirall.tar /tmp/a1.
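If you do want find for filtering, the same piping pattern applies; for example, to archive only files modified in the last day (the -mtime test and the archive name here are just illustrative):
find /tmp/a1 -type f -mtime -1 -print0 | tar --null -T - -cvf recent.tar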
You're one character away from the solution. The find command's exec option will execute the command for each file found, so you should replace -c with -r to put tar into append mode. Each time find invokes it, it'll tack on one more file:
rm -f dirall.tar
find /tmp/a1 -exec tar -rvf dirall.tar {} \;
I'd think something like "find /tmp/a1 | xargs tar cvf foo.tar" would work. But make sure you have backups first!
Does HP-UX have cpio? It will take a list of files on stdin, and some versions will write output in tar format.
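If cpio is available, the usual pattern is to feed it the file list on stdin; GNU cpio can also write a tar-compatible archive with -H ustar, though whether your cpio supports that format is worth checking in its man page first:
find /tmp/a1 -type f | cpio -o > dirall.cpio
find /tmp/a1 -type f | cpio -o -H ustar > dirall.tar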
I'm trying to tar up all the *.class files only on a Solaris box under a certain directory.
Reading the man pages for tar made it seem like the -I option is what I wanted.
This is what I've tried from the dir in question:
find . -name "*.class" >> ~/includes.txt
tar cvf ~/classfiles.tar -I ~/includes.txt
From that I get:
tar: Removing leading `/' from member names
/home/myhomedir/includes.txt
And the ~/classfiles.tar file is garbage.
I don't have write permission on the dir where the *.class files are so I need to have the tar written to my home dir. Could someone tell me where I have gone wrong? What tar magic should I use?
Check which tar you are running. That message about removing the leading slash is a gtar (GNU tar) message, and the -I option you are trying to use is a Sun tar option (which lives in /bin/tar).
(at least the above is all true on my Solaris machine)
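So, assuming the includes.txt from the question, either call Sun's tar explicitly or use GNU tar's equivalent option for reading a file list; both of these are sketches to adapt:
/usr/bin/tar cvf ~/classfiles.tar -I ~/includes.txt    # Sun tar: -I takes an include file
gtar cvf ~/classfiles.tar -T ~/includes.txt            # GNU tar: -T takes a file list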