tar cannot write to standard output on Ubuntu 16:
prod ~ $ cat /etc/os-release | grep -i version
VERSION="16.04.2 LTS (Xenial Xerus)"
VERSION_ID="16.04"
VERSION_CODENAME=xenial
prod ~ $ tar -cf - tmp
tar: Refusing to write archive contents to terminal (missing -f option?)
tar: Error is not recoverable: exiting now
Let's try on CentOS 7:
[root@drft068 ~]# tar -cf - /tmp
tar: Removing leading `/' from member names
tar: /tmp/mongodb-27018.sock: socket ignored
tar: /tmp/mongodb-27017.sock: socket ignored
tmp/00017770000000000000000000000000131574472010
What am I doing wrong?
To have a proper SO answer here, I have condensed @Kamajii's and @Cyrus's comments on the question (please consider doing that yourselves ...).
tar notices that its stdout will end up on a terminal. Besides the output being binary and thus not really readable for humans, writing it to a terminal is also a security risk.
As soon as you redirect the stdout of the tar process, it will work. One example given involves cat, which doesn't care if you feed it a binary blob. Thus
tar -cf - tmp | cat
will display the binary stuff. Otherwise, tar -cf - tmp > mytmp.tar is probably closer to what you want (although for this example, a plain tar -cf mytmp.tar tmp would have been the typical call).
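For completeness, here are the typical ways to consume tar's stdout; the archive name, compressor, and remote host below are placeholders of my choosing, not anything from the question:
tar -cf - tmp > mytmp.tar                                      # plain redirect to a file
tar -cf - tmp | gzip > mytmp.tar.gz                            # compress on the fly
tar -cf - tmp | ssh user@backuphost 'tar -xf - -C /restore'    # stream to another machine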
Related
System: Arch Linux in VirtualBox 5.1.26 on Windows 10 Host
I am trying to use perl like sed in the terminal for in-place substitution of an input file:
perl -i -p -e 's/orig/replace/g' input_file
But I always get:
Can't remove input_file: Text file busy, skipping file
This happens only if the file is inside a VirtualBox vboxsf share. With all other tools (sed, mv, vim or whatever), changing the file is no problem.
This problem seems to be related to:
https://www.virtualbox.org/ticket/2553
https://forums.virtualbox.org/viewtopic.php?t=4437
I can't find any solution googling around :(
Update:
Using perl -i.bak -p -e 's/orig/replace/g' input_file I get a similar message:
Can't rename input_file to input_file.bak: Text file busy, skipping file.
This is exactly the same message as gedit shows, so it is the same behavior; but googling around, I can only find the gedit topic. It seems no one has noticed this with perl -i.
While you are running a Unix OS, you are still using a Windows file system. NTFS doesn't support anonymous files the way Unix file systems do, and perl -i (without a backup extension) requires support for anonymous files.
The workaround is to use a temporary backup file by passing -i<ext> (e.g. -i~) instead of a bare -i.
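A minimal sketch of that workaround, applied to the command from the question (only the backup suffix is new):
perl -i~ -p -e 's/orig/replace/g' input_file
This leaves an input_file~ backup behind on the share, which you can delete afterwards.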
I have the same problem. My solution is a bash script: copy the files to a tmp directory, do the search and replace there, overwrite the original files with the tmp files, then delete the tmp directory. If you need to, you can add parameters to the script for dynamic search & replace (see the sketch after the script) and create an alias so you can call the script directly from anywhere.
#!/bin/bash
echo "Removing text from .log files..."
echo "Creating tmp-dir..."
mkdir -p /tmp/myTmpFiles/
echo "Copying .log files to tmp..."
cp -v /home/user/sharedfolder/*.log /tmp/myTmpFiles/
echo "Searching and replacing in tmp files..."
perl -i -p0e 's/orig/replace/g' /tmp/myTmpFiles/*.log
echo "Copying .log files back to sharedfolder..."
cp -v /tmp/myTmpFiles/*.log /home/user/sharedfolder/
echo "Removing tmp-dir..."
rm -vr /tmp/myTmpFiles/
echo "Done..."
I am installing a program which has a file "drown.bin" (a Bourne shell script text executable).
When I execute this file as part of the process, it gives this error:
gzip: stdin: unexpected end of file
tar: Child returned status 1
tar: Error exit delayed from previous errors
Below are the file contents (I have pasted only the shell portion; the rest is binary data):
dir_tmp=/tmp/.$(date +"%y%m%d%H%M%S")
mkdir /$dir_tmp >/dev/null
sed -n -e '1,/^exit 0$/!p' $0 > "${dir_tmp}/.make-3000.tar.gz" 2>/dev/null
cd $dir_tmp >/dev/null
tar xvfz .make*.tar.gz >/dev/null
./.make
rm -rf $dir_tmp >/dev/null
exit 0
Can someone please advise what goes wrong in the sed command such that it creates a corrupted .tar.gz file? I have already tried three systems with different CentOS versions.
It's not the sed command that fails, but the tar command.
This is a "self-extracting" tar file. The script that sits in front attempts to unpack the rest of the file, starting after the line exit 0. Likely the rest of the file is somehow corrupt.
If you downloaded it, try that again. If you copied it from somewhere else (especially via FTP), make sure you used binary mode.
If that didn't work, here is what you could try:
copy the script-file to a file with extension .tgz, e.g. cp drown.bin mycopy.tgz
edit the copy, removing all script lines up to and including the exit 0 line. The file should now be a pure tar.gz file.
on the command line, run tar tzvf mycopy.tgz to see the contents, and tar xzvf mycopy.tgz to actually unpack. This will likely fail with the same error you got first, but at least you can now see the part of the content that was extracted before the failure.
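Alternatively, you can reuse the installer's own sed line to carve the payload out of the file without any hand editing (this is exactly the command from the quoted script, pointed at drown.bin and an output name of my choosing):
sed -n -e '1,/^exit 0$/!p' drown.bin > mycopy.tgz
tar tzvf mycopy.tgz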
I suppose I could compare the number of files in the source directory to the number of files in the target directory as cp progresses, or perhaps do it with folder size instead? I tried to find examples, but all bash progress bars seem to be written for copying single files. I want to copy a bunch of files (or a directory, if the former is not possible).
You can also use rsync instead of cp like this:
rsync -Pa source destination
This will give you a progress bar and an estimated time of completion (-P is shorthand for --partial --progress). Very handy.
To show a progress bar while doing a recursive copy of files & folders & subfolders (including links and file attributes), you can use gcp (easily installed in Ubuntu and Debian by running "sudo apt-get install gcp"):
gcp -rf SRC DEST
Here is the typical output while copying a large folder of files:
Copying 1.33 GiB 73% |##################### | 230.19 M/s ETA: 00:00:07
Notice that it shows just one progress bar for the whole operation, whereas if you want a single progress bar per file, you can use rsync:
rsync -ah --progress SRC DEST
You may have a look at the tool vcp. That's a simple copy tool with two progress bars: one for the current file and one for the overall operation.
EDIT
Here is the link to the sources: http://members.iinet.net.au/~lynx/vcp/
Manpage can be found here: http://linux.die.net/man/1/vcp
Most distributions have a package for it.
Here is another solution: use the tool bar.
You could invoke it like this:
#!/bin/bash
filesize=$(du -sb "${1}" | awk '{ print $1 }')
tar -cf - -C "${1}" ./ | bar --size "${filesize}" | tar -xf - -C "${2}"
You have to go through tar, and it will be inaccurate for small files. Also, you must make sure that the target directory exists. But it is a way.
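Saved as, say, copy-with-bar.sh (the name is my invention), it could be called like this; note the mkdir for the target-directory caveat mentioned above:
mkdir -p /target/dir
./copy-with-bar.sh /source/dir /target/dir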
My preferred option is Advanced Copy, as it is a patch on top of the original cp source code.
$ wget http://ftp.gnu.org/gnu/coreutils/coreutils-8.32.tar.xz
$ tar xvJf coreutils-8.32.tar.xz
$ cd coreutils-8.32/
$ wget --no-check-certificate https://raw.githubusercontent.com/jarun/advcpmv/master/advcpmv-0.8-8.32.patch
$ patch -p1 -i advcpmv-0.8-8.32.patch
$ ./configure
$ make
The new programs are now located in src/cp and src/mv. You may choose to replace your existing commands:
$ sudo cp src/cp /usr/local/bin/cp
$ sudo cp src/mv /usr/local/bin/mv
Then you can use cp as usual, or specify -g to show the progress bar:
$ cp -g src dest
A simple Unix way is to go to the destination directory and run watch -n 5 du -s . (the trailing dot is the directory argument). Perhaps make it prettier by rendering it as a bar (a crude sketch follows below). This can help in environments where you have only the standard Unix utilities and no scope for installing additional tools. du -s is the key; watch just reruns it every 5 seconds.
Pros: works on any Unix system. Cons: no progress bar.
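A crude sketch of the "show it as a bar" idea using only standard tools; note that du -sb (byte totals) is the GNU form, so this particular version is Linux-oriented, and the paths are placeholders:
# run this in a second terminal while the copy is in progress
total=$(du -sb /source/dir | awk '{print $1}')
while true; do
  copied=$(du -sb /dest/dir | awk '{print $1}')
  printf '\r%3d%% (%s of %s bytes)' $((100 * copied / total)) "$copied" "$total"
  sleep 5
done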
To add another option, you can use cpv. It uses pv to imitate the usage of cp.
It works like pv, but you can use it to recursively copy directories.
You can get it here
There's a tool, pv, that does this exact thing: http://www.ivarch.com/programs/pv.shtml
There's an Ubuntu package for it in apt.
How about something like
find . -type f | pv -s $(find . -type f | wc -c) | xargs -i cp {} --parents /DEST/$(dirname {})
It finds all the files in the current directory, pipes that list through pv (giving pv an estimated size so the progress meter works), and then pipes it to a cp command with the --parents flag so the destination path matches the source path. Note that pv is measuring the stream of file names rather than the file contents, so the progress reflects how much of the file list has been processed, not how many bytes have been copied.
One problem I have yet to overcome is that if you issue this command
find /home/user/test -type f | pv -s $(find . -type f | wc -c) | xargs -i cp {} --parents /www/test/$(dirname {})
the destination path becomes /www/test/home/user/test/....FILES... and I am unsure how to tell the command to get rid of the /home/user/test part. That's why I have to run it from inside the SRC directory.
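One way to formalize "run it from inside the SRC directory" is a subshell, so your own working directory is left untouched (this is the same command as above, just wrapped in parentheses):
(cd /home/user/test && find . -type f | pv -s $(find . -type f | wc -c) | xargs -i cp {} --parents /www/test/$(dirname {}))
Because find now emits paths relative to /home/user/test, --parents recreates only that relative tree under /www/test.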
Check the source code for progress_bar in my git repository below:
https://github.com/Kiran-Bose/supreme
Also try my custom bash script package supreme to see how the progress bar works with the cp and mv commands.
Functionality overview
(1)Open Apps
----Firefox
----Calculator
----Settings
(2)Manage Files
----Search
----Navigate
----Quick access
|----Select File(s)
|----Inverse Selection
|----Make directory
|----Make file
|----Open
|----Copy
|----Move
|----Delete
|----Rename
|----Send to Device
|----Properties
(3)Manage Phone
----Move/Copy from phone
----Move/Copy to phone
----Sync folders
(4)Manage USB
----Move/Copy from USB
----Move/Copy to USB
There is the command progress (https://github.com/Xfennec/progress), a coreutils progress viewer.
Just run progress in another terminal to see the copy/move progress. For continuous monitoring, use the -M flag.
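For example, to watch one specific copy (the -m/--monitor and -p/--pid flags are from the progress man page; treat this as a sketch):
cp -r /big/source /some/dest &
progress -mp $!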
Does anybody know if there is an rsync option so that directories that are being traversed do not show on stdout?
I'm syncing music libraries, and the massive amount of directories make it very hard to see which file changes are actually happening.
I've already tried -v and -i, but both also show directories.
If you're using --delete in your rsync command, the problem with calling grep -E -v '/$' is that it will also omit informational lines like:
deleting folder1/
deleting folder2/
deleting folder3/folder4/
If you're making a backup and the remote folder has been completely wiped out for whatever reason, it will also wipe out your local folder, because you never see the deleting lines.
To omit the already-existing folders but keep the deleting lines at the same time, you can use this expression (it keeps lines that start with deleting as well as lines that do not end in a slash):
rsync -av --delete remote_folder local_folder | grep -E '^deleting|[^/]$'
I'd be tempted to filter by piping through grep -E -v '/$', which uses an end-of-line anchor to remove lines that finish with a slash (i.e., directories).
Here's the demo terminal session where I checked it...
cefn@cefn-natty-dell:~$ mkdir rsynctest
cefn@cefn-natty-dell:~$ cd rsynctest/
cefn@cefn-natty-dell:~/rsynctest$ mkdir 1
cefn@cefn-natty-dell:~/rsynctest$ mkdir 2
cefn@cefn-natty-dell:~/rsynctest$ mkdir -p 1/first 1/second
cefn@cefn-natty-dell:~/rsynctest$ touch 1/first/file1
cefn@cefn-natty-dell:~/rsynctest$ touch 1/first/file2
cefn@cefn-natty-dell:~/rsynctest$ touch 1/second/file3
cefn@cefn-natty-dell:~/rsynctest$ touch 1/second/file4
cefn@cefn-natty-dell:~/rsynctest$ rsync -r -v 1/ 2
sending incremental file list
first/
first/file1
first/file2
second/
second/file3
second/file4
sent 294 bytes received 96 bytes 780.00 bytes/sec
total size is 0 speedup is 0.00
cefn@cefn-natty-dell:~/rsynctest$ rsync -r -v 1/ 2 | grep -E -v '/$'
sending incremental file list
first/file1
first/file2
second/file3
second/file4
sent 294 bytes received 96 bytes 780.00 bytes/sec
total size is 0 speedup is 0.00
I'm trying to tar up all the *.class files only on a Solaris box under a certain directory.
Reading the man pages for tar made it seem like the -I option is what I wanted.
This is what I've tried from the dir in question:
find . -name "*.class" >> ~/includes.txt
tar cvf ~/classfiles.tar -I ~/includes.txt
From that I get:
tar: Removing leading `/' from member names
/home/myhomedir/includes.txt
And the ~/classfiles.tar file is garbage.
I don't have write permission on the dir where the *.class files are, so I need the tar written to my home dir. Could someone tell me where I have gone wrong? What tar magic should I use?
Check which tar you are running. That message about removing the leading slash is a gtar (GNU tar) message, and the -I option you are trying to use is a Sun tar option (which lives in /bin/tar).
(at least the above is all true on my Solaris machine)
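If it turns out you are running GNU tar, the rough equivalent of Sun tar's -I is -T (--files-from), so a sketch of the same job would be:
find . -name "*.class" > ~/includes.txt
tar cvf ~/classfiles.tar -T ~/includes.txt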