Odd Behaviour in Removing .git - command-line

I tried to remove my Git files:
rm -R .git | yes
My CPU fan gets loud, and no files are removed. I cannot understand what is going on. How can I remove my .git files?

Try
yes | rm -r .git
You were piping the output of rm into yes (data flows left to right), but yes does not read stdin, so rm was left waiting for answers that never arrived while yes spun at full speed writing 'y' to your terminal (hence the CPU noise).
Also, you do not really need yes here at all. The only prompts you would want to answer 'yes' to automatically are the confirmations for deleting write-protected files (Git's object files are read-only), so you can use the -f ('force') option instead:
rm -rf .git
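For context, the prompts come from Git's object files being created read-only; without -f, GNU rm asks before removing each write-protected file, along these lines (a representative transcript with a made-up object path):
$ rm -r .git
rm: remove write-protected regular file '.git/objects/ab/cdef0123...'? y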

.git is likely to have a lot of files under it. Try using
$ rm -Rvf .git
that way it will show you what files are being deleted.

It looks like you're trying to deal with rm asking for confirmation before each deletion by piping in the output of yes, which produces an infinite stream of "y" lines, but you're doing it the wrong way round.
rm -Rf .git # the -f option is "force", i.e. don't ask for confirmation.
If you want to pipe the output of one command into another, the source has to come first, before the pipe:
yes | head

Related

Unable to delete thousands of files within a folder in terminal

I'm trying to delete files inside a certain folder but it's throwing an error:
rm -rf /usr/html/sched/downloads/*
-bash: /bin/rm: Argument list too long
I searched online and found this solution but I'm afraid to try it being a production server and I don't know how to put the path correctly:
find . -name '*' | xargs rm -v
How can I delete the thousands of files within the /downloads directory? FYI, there are no sub-directories.
For a large number of files like this, you need to let find generate the list and feed it to rm in batches instead of expanding everything on one command line; here is one way to handle it (this example also limits itself to cache files older than half a day):
find ./cache -mtime +0.5 -print0 | xargs -0 rm -f
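Applied to the directory from the question, a minimal sketch of the same idea (assuming GNU find; -maxdepth 1 stays out of sub-directories, and because find generates the list itself there is no "Argument list too long"):
find /usr/html/sched/downloads/ -maxdepth 1 -type f -print0 | xargs -0 rm -f
With GNU find you can also skip xargs entirely and let find do the deleting:
find /usr/html/sched/downloads/ -maxdepth 1 -type f -delete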

how to print the progress of the files being copied in bash [duplicate]

I suppose I could compare the number of files in the source directory to the number of files in the target directory as cp progresses, or perhaps do it with folder size instead? I tried to find examples, but all bash progress bars seem to be written for copying single files. I want to copy a bunch of files (or a directory, if the former is not possible).
You can also use rsync instead of cp like this:
rsync -Pa source destination
Which will give you a progress bar and estimated time of completion. Very handy.
To show a progress bar while doing a recursive copy of files & folders & subfolders (including links and file attributes), you can use gcp (easily installed in Ubuntu and Debian by running "sudo apt-get install gcp"):
gcp -rf SRC DEST
Here is the typical output while copying a large folder of files:
Copying 1.33 GiB 73% |##################### | 230.19 M/s ETA: 00:00:07
Notice that it shows just one progress bar for the whole operation, whereas if you want a single progress bar per file, you can use rsync:
rsync -ah --progress SRC DEST
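If your rsync is version 3.1 or newer, --info=progress2 replaces the per-file output with a single overall progress line (a small variation on the command above):
rsync -ah --info=progress2 SRC DEST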
You may have a look at the tool vcp. That's a simple copy tool with two progress bars: one for the current file, and one for the overall progress.
EDIT
Here is the link to the sources: http://members.iinet.net.au/~lynx/vcp/
Manpage can be found here: http://linux.die.net/man/1/vcp
Most distributions have a package for it.
Here is another solution: use the tool bar.
You could invoke it like this:
#!/bin/bash
# usage: pass the source directory as $1 and an existing destination directory as $2
filesize=$(du -sb "${1}" | awk '{ print $1 }')
tar -cf - -C "${1}" ./ | bar --size "${filesize}" | tar -xf - -C "${2}"
It routes the copy through tar, so it will be inaccurate for lots of small files, and you must make sure the target directory exists. But it is one way to do it.
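If the snippet above were saved as an executable script, say copy-with-bar (a name chosen here purely for illustration), it would be invoked with a source directory and an existing destination directory:
./copy-with-bar /path/to/source /path/to/destination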
My preferred option is Advanced Copy, since it is a patch applied to the original cp and mv sources.
$ wget http://ftp.gnu.org/gnu/coreutils/coreutils-8.32.tar.xz
$ tar xvJf coreutils-8.32.tar.xz
$ cd coreutils-8.32/
$ wget --no-check-certificate https://raw.githubusercontent.com/jarun/advcpmv/master/advcpmv-0.8-8.32.patch
$ patch -p1 -i advcpmv-0.8-8.32.patch
$ ./configure
$ make
The new programs are now located in src/cp and src/mv. You may choose to replace your existing commands:
$ sudo cp src/cp /usr/local/bin/cp
$ sudo cp src/mv /usr/local/bin/mv
Then you can use cp as usual, or specify -g to show the progress bar:
$ cp -g src dest
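The patch adds the same flag to mv, so (assuming the patched binaries were installed as above) a move with a progress bar looks the same:
$ mv -g src dest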
A simple Unix way is to go to the destination directory and run watch -n 5 du -sh . This can help in environments where you have just the standard Unix utilities and no scope for installing additional tools: du -sh reports how much has been copied so far, and watch simply re-runs it every 5 seconds.
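Concretely (a sketch; the destination path is a placeholder, and -h makes du print human-readable sizes):
cd /path/to/destination && watch -n 5 du -sh .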
Pros: works on any Unix system. Cons: no progress bar.
To add another option, you can use cpv. It uses pv to imitate the usage of cp.
It works like pv but you can use it to recursively copy directories
You can get it here
There's a tool, pv, that does this exact thing: http://www.ivarch.com/programs/pv.shtml
There's an Ubuntu package for it in apt.
How about something like
find . -type f | pv -s $(find . -type f | wc -c) | xargs -i cp {} --parents /DEST/$(dirname {})
It finds all the files in the current directory, pipes the list through pv with an estimated size so the progress meter works, and then pipes each name to cp with the --parents flag so the destination path matches the source path.
One problem I have yet to overcome is that if you issue this command
find /home/user/test -type f | pv -s $(find . -type f | wc -c) | xargs -i cp {} --parents /www/test/$(dirname {})
the destination path becomes /www/test/home/user/test/....FILES... and I am unsure how to tell the command to strip the '/home/user/test' part. That's why I have to run it from inside the SRC directory.
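One way to keep the same command but avoid the extra /home/user/test prefix is to run it from the source directory in a subshell, which is the same workaround without permanently changing your working directory (a sketch of the idea rather than a fix for the command itself):
(cd /home/user/test && find . -type f | pv -s $(find . -type f | wc -c) | xargs -i cp {} --parents /www/test/$(dirname {}))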
Check the source code for progress_bar in my git repository below:
https://github.com/Kiran-Bose/supreme
Also try my custom bash script package supreme to see how the progress bar works with the cp and mv commands.
Functionality overview
(1)Open Apps
----Firefox
----Calculator
----Settings
(2)Manage Files
----Search
----Navigate
----Quick access
|----Select File(s)
|----Inverse Selection
|----Make directory
|----Make file
|----Open
|----Copy
|----Move
|----Delete
|----Rename
|----Send to Device
|----Properties
(3)Manage Phone
----Move/Copy from phone
----Move/Copy to phone
----Sync folders
(4)Manage USB
----Move/Copy from USB
----Move/Copy to USB
There is the command progress (https://github.com/Xfennec/progress), a coreutils progress viewer.
Just run progress in another terminal to see the copy/move progress. For continuous monitoring, use the -M flag.
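For example (the copy is whatever you already started; progress just needs to run alongside it):
# terminal 1: the copy you want to watch
cp -r /path/to/source /path/to/destination
# terminal 2: keep monitoring until interrupted
progress -M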

wget - does the -nc option skip downloading existing files?

I'm downloading a website with wget. The command is as below:
wget -nc --recursive --page-requisites --html-extension --convert-links --restrict-file-names=windows --domain any-domain.com --no-parent http://any-domain.com/any-page.html
Does the -nc option skip downloading existing files even when we download a website recursively? It seems the -nc option does not work.
The man page says:
-nc
--no-clobber
If a file is downloaded more than once in the same directory, Wget's behavior depends on a few options, including -nc. In certain cases, the local file will be clobbered, or overwritten, upon repeated download.
Here are more details (also from the man page):
When running Wget with -r or -p, but without -N, -nd, or -nc, re-downloading a file will result in the new copy simply overwriting the old.
Yes, the -nc option will prevent re-download of the file.
The manual page is confusing because it describes all of the related options together.
Here are the pertinent bits from the man page:
When running Wget with -r or -p, but without -N or -nc, re-downloading a file will result in the new copy simply overwriting the old. Adding -nc will prevent this behavior, instead causing the original version to be preserved and any newer copies on the server to be ignored.
The option --convert-links seems to conflict with -nc. Try removing it.
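For example, the question's command with --convert-links dropped and everything else unchanged:
wget -nc --recursive --page-requisites --html-extension --restrict-file-names=windows --domain any-domain.com --no-parent http://any-domain.com/any-page.html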

How to find out what commit a checked out file came from

When I check out a file with git checkout $commit $filename and I forget $commit but still remember $filename, how do I find out what $commit was?
First, a non-git answer: check your shell's command history. (Of course, if you weren't using a shell with command history, that won't help.)
The git answer: you generally cannot find THE $commit. The same contents might have been part of many commits, and I don't think git keeps a log of which single file you checked out (it does keep a log of previous values of HEAD).
Here is a brute-force script, git-find-by-contents. Call it with your $filename as the parameter, and it will show you all commits where this file was included. As the name says, it searches by contents, so it will find the file under any name, as long as the contents match.
#!/bin/sh
# git-find-by-contents: list every revision whose tree contains a blob
# identical to the given file
tmpdir=/tmp/$(basename "$0")
mkdir "$tmpdir" 2>/dev/null
rm "$tmpdir"/* 2>/dev/null
hash=$(git hash-object "$1")
echo "finding $hash"
allrevs=$(git rev-list --all)
# well, nearly all revs; we could still check the log if we have
# dangling commits, and we could include the index to be perfect...
for rev in $allrevs
do
    git ls-tree --full-tree -r "$rev" >"$tmpdir/$rev"
done
cd "$tmpdir" || exit 1
grep "$hash" *
rm -r "$tmpdir"
I would not be surprised if there is a more elegant way, but this has worked for me a couple of times in similar situations.
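The same idea can also be written as a short pipeline without temporary files (a sketch; path/to/file stands in for your $filename):
hash=$(git hash-object path/to/file)
git rev-list --all | while read -r rev; do
    git ls-tree --full-tree -r "$rev" | grep -q "$hash" && echo "$rev"
done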
EDIT: a more techy version of the same problem appears here: Which commit has this blob?
I don't think you can. Git just loads that version of the file into the index and your working directory. There is no reference keeping track of which version it loaded.
If this is something that you do often, you could write up a script that could do it using git diff --cached and march through all the commits that changed that file.
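A rough sketch of that idea (the file path and loop structure here are illustrative, not a finished script):
file=path/to/file    # the file you checked out earlier
for c in $(git rev-list --all -- "$file"); do
    # --cached compares the index (where "git checkout <commit> <file>" put the content) with commit $c
    if git diff --cached --quiet "$c" -- "$file"; then
        echo "$c"
    fi
done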
You can use
git log <filename>
to find out which commit you want to check out.
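Adding --oneline and a -- separator makes the output easier to scan and avoids any ambiguity between the path and a revision (a small variation; <filename> is a placeholder as above):
git log --oneline -- <filename>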
If you're lucky, you could maybe try a non-git way:
history | grep "git checkout.*filename"
Try this (untested ;):
$ for commit in $(git log --format=%h $filename); do
> if diff <(git show $commit:$filename) $filename >/dev/null; then
> echo $commit
> fi
> done
Simple and elegant:
$ git log -S"$(cat /path/to/file)"
This only works if the content is unique, but then again that is also true of the hash-comparison answers above.
It also displays only the first version that matches, rather than all.
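To search all branches rather than just the current one, the same command can be given --all:
$ git log --all -S"$(cat /path/to/file)"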
Here are the details of a script I polished up as the answer to a similar question, along with a screenshot of it in action (source: adamspiers.org).

how to prevent "find" from diving deeper than the current directory

I have many directories with lots of files inside them.
I've just compressed each of those directories into filename.tar.gz, someothername.tar.gz, etc.
After compressing, I use this command to delete everything except the files whose names contain .tar.gz:
find . ! -name '*.tar.gz*' | xargs rm -r
But the problem is that find dives too deep into the subdirectories. Because a directory has already been deleted while find is still descending into it, many messages are displayed, such as:
rm: cannot remove `./dirname/index.html': No such file or directory
So how do I prevent find from diving deeper than this level (the current directory)?
You can use ls instead of find for your problem:
ls | grep -v '\.tar\.gz' | xargs rm -rf
You can tell find the max depth to recurse:
find -maxdepth 1 ....
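Combined with the original command, a sketch (assuming GNU find; -mindepth 1 excludes . itself, and -exec avoids the pipe through xargs):
find . -mindepth 1 -maxdepth 1 ! -name '*.tar.gz*' -exec rm -r {} +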