I am trying to copy a symbolic link on Solaris, but I find that it does not simply copy the link; instead it copies the whole contents of the directory/file the link is pointing to. This is not the case on other OSes such as AIX, HP-UX, or Linux.
Is this normal behaviour for Solaris?
Charlie was close: you want the -L, -H or -P flag together with the -R flag (probably just -R -P). Similar flags exist for chmod(1) and chgrp(1). I've pasted an excerpt from the man page below.
Example:
$ touch x
$ ln -s x y
$ ls -l x y
-rw-r--r-- 1 mjc mjc 0 Mar 31 18:58 x
lrwxrwxrwx 1 mjc mjc 1 Mar 31 18:58 y -> x
$ cp -R -P y z
$ ls -l z
lrwxrwxrwx 1 mjc mjc 1 Mar 31 18:58 z -> x
$
Alternatively, plain old tar will happily work with symbolic links by default, even the venerable version that ships with Solaris:
tar -cf - foo | ( cd bar && tar -xf - )
(where foo is a symlink or a directory containing symlinks).
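Continuing the x/y example above (a minimal sketch; bar is assumed to be an existing destination directory), the link comes across as a link rather than as a copy of its target:
$ mkdir bar
$ tar -cf - y | ( cd bar && tar -xf - )
$ ls -l bar/y   # bar/y is still a symlink pointing to x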
/usr/bin/cp -r | -R [-H | -L | -P] [-fip#] source_dir... target
...
-H Takes actions based on the type and contents of the
file referenced by any symbolic link specified as a
source_file operand.
If the source_file operand is a symbolic link, then cp
copies the file referenced by the symbolic link for
the source_file operand. All other symbolic links
encountered during traversal of a file hierarchy are
preserved.
-L Takes actions based on the type and contents of the
file referenced by any symbolic link specified as a
source_file operand or any symbolic links encountered
during traversal of a file hierarchy.
Copies files referenced by symbolic links. Symbolic
links encountered during traversal of a file hierarchy
are not preserved.
-P Takes actions on any symbolic link specified as a
source_file operand or any symbolic link encountered
during traversal of a file hierarchy.
Copies symbolic links. Symbolic links encountered
during traversal of a file hierarchy are preserved.
You want cp -P, I believe (check the man page, as I don't have a Solaris box handy right now). I faintly suspect that's a System V-ism, but wouldn't swear to it.
It sounds like you're trying to duplicate a single symlink.
You might want to just do:
link_destination=`/bin/ls -l /opt/standard_perl/link | awk '{print $NF}'`   # $NF is the last field, i.e. the link target
ln -s "$link_destination" /opt/standard_perl/link_new
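If your system has readlink(1) (its availability on older Solaris releases is an assumption here), you can avoid parsing ls output entirely:
ln -s "$(readlink /opt/standard_perl/link)" /opt/standard_perl/link_new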
If you are trying to copy a directory hierarchy, this can be very difficult to do in general without the GNU tools (or rsync). While there are solutions that often work, there is no solution that works on every "standard" unix with every type of filename you might encounter. If you're going to be doing this regularly, you should install the GNU coreutils, find, cpio, and tar, and also rsync as well.
Will cpio do the trick for you?
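For example, a pass-mode cpio copy preserves symbolic links as links by default (a minimal sketch; /src/dir and /dest/dir are placeholder paths and /dest/dir must already exist):
cd /src/dir && find . -print | cpio -pdm /dest/dir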
I need to check if local file is same as remote host file.
The file locations are like below:
File1 at Local machine
./remotehostname/home/a/b/scripts/xyz.cpp
File2 at remote machine
remotehostname:/home/a/b/scripts/xyz.cpp
I intend to compare these 2 files, using the command
diff ./remotehostname/home/a/b/scripts/xyz.cpp remotehostname:/home/a/b/scripts/xyz.cpp
find . -type f | grep -v .svn |xargs -I % diff %
I need to change % to take remotehost and compare the file.
Not sure how to apply sed on %. Or is there a better way to compare such files?
One way could be to save the list of files and then apply sed on that file, but I think there should be an even better way. Also, diff doesn't work on remote hosts; maybe I need to use the output of a dry-run rsync?
This can be done with xargs, but I prefer to use while read in bash.
xargs method
find . -type f | grep -v .svn | sed 's/.*/& remotehostname:&/' | xargs -n2 diff
The sed command duplicates the input and makes whatever modifications you need. xargs then passes the inputs to diff two at a time. This will not work if any filename contains spaces.
bash method
find . -type f | grep -v .svn | while read line; do
diff "$line" "remotehostname:$line"
done
The bash read command reads a line from stdin, places it in the variable $line, and returns true. You can then put whatever you like inside the loop, so you get total freedom to rewrite the filename however you need. When the input runs out, read returns false and the loop exits.
Note that piping things into loops has some interesting side effects that are not relevant here, but might bite you one day.
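Note also that diff itself cannot open the remotehostname:/path form the way scp or rsync can. One hedged variation on the loop above, assuming you have ssh access to remotehostname and that the local tree under ./remotehostname mirrors the remote filesystem, is to stream the remote file over ssh and compare it with process substitution:
find . -type f | grep -v .svn | while read line; do
    remote_path="${line#./remotehostname}"   # strip the local mirror prefix (assumed layout)
    diff "$line" <(ssh remotehostname cat "$remote_path")
done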
If you are interested in the actual difference (and not just whether they differ - which rsync is brilliant for telling you) then you can do this using GNU Parallel:
find . -type f | grep -v .svn |
parallel diff {} '<(ssh {= s:./::;s:/.*:: =} cat {= s:([^/]+/){2,2}::;$_=::shell_quote_scalar($_) =})'
s:./::;s:/.*:: = hostname from path
s:([^/]+/){2,2}:: = rest of path
::shell_quote_scalar = \-quote special chars as needed by the shell
GNU Parallel is a general parallelizer and makes it easy to run jobs in parallel on the same machine or on multiple machines you have ssh access to. It can often replace a for loop.
If you have 32 different jobs you want to run on 4 CPUs, a straightforward way to parallelize is to run 8 jobs on each CPU.
GNU Parallel instead spawns a new process when one finishes, keeping the CPUs active and thus saving time.
Installation
If GNU Parallel is not packaged for your distribution, you can do a personal installation, which does not require root access. It can be done in 10 seconds by doing this:
(wget -O - pi.dk/3 || curl pi.dk/3/ || fetch -o - http://pi.dk/3) | bash
For other installation options see http://git.savannah.gnu.org/cgit/parallel.git/tree/README
Learn more
See more examples: http://www.gnu.org/software/parallel/man.html
Watch the intro videos: https://www.youtube.com/playlist?list=PL284C9FF2488BC6D1
Walk through the tutorial: http://www.gnu.org/software/parallel/parallel_tutorial.html
Sign up for the email list to get support: https://lists.gnu.org/mailman/listinfo/parallel
I suppose I could compare the number of files in the source directory to the number of files in the target directory as cp progresses, or perhaps do it with folder size instead? I tried to find examples, but all bash progress bars seem to be written for copying single files. I want to copy a bunch of files (or a directory, if the former is not possible).
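A minimal sketch of that file-count idea, to be run in a second terminal while the copy is in progress (/path/to/src and /path/to/dest are placeholders):
#!/bin/bash
src=/path/to/src
dest=/path/to/dest
total=$(find "$src" -type f | wc -l)            # files to be copied
while :; do
    copied=$(find "$dest" -type f | wc -l)      # files already present in the target
    printf '\r%d / %d files copied' "$copied" "$total"
    [ "$copied" -ge "$total" ] && break
    sleep 2
done
echo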
You can also use rsync instead of cp like this:
rsync -Pa source destination
Which will give you a progress bar and estimated time of completion. Very handy.
To show a progress bar while doing a recursive copy of files & folders & subfolders (including links and file attributes), you can use gcp (easily installed in Ubuntu and Debian by running "sudo apt-get install gcp"):
gcp -rf SRC DEST
Here is the typical output while copying a large folder of files:
Copying 1.33 GiB 73% |##################### | 230.19 M/s ETA: 00:00:07
Notice that it shows just one progress bar for the whole operation, whereas if you want a single progress bar per file, you can use rsync:
rsync -ah --progress SRC DEST
You may have a look at the tool vcp. That's a simple copy tool with two progress bars: one for the current file, and one for the overall operation.
EDIT
Here is the link to the sources: http://members.iinet.net.au/~lynx/vcp/
Manpage can be found here: http://linux.die.net/man/1/vcp
Most distributions have a package for it.
Here is another solution: use the tool bar.
You could invoke it like this:
#!/bin/bash
filesize=$(du -sb "${1}" | awk '{ print $1 }')
tar -cf - -C "${1}" ./ | bar --size "${filesize}" | tar -xf - -C "${2}"
You have to go through tar, and it will be inaccurate on small files. Also, you must make sure that the target directory exists. But it is one way of doing it.
My preferred option is Advanced Copy, as it uses the original cp source files.
$ wget http://ftp.gnu.org/gnu/coreutils/coreutils-8.32.tar.xz
$ tar xvJf coreutils-8.32.tar.xz
$ cd coreutils-8.32/
$ wget --no-check-certificate https://raw.githubusercontent.com/jarun/advcpmv/master/advcpmv-0.8-8.32.patch
$ patch -p1 -i advcpmv-0.8-8.32.patch
$ ./configure
$ make
The new programs are now located in src/cp and src/mv. You may choose to replace your existing commands:
$ sudo cp src/cp /usr/local/bin/cp
$ sudo cp src/mv /usr/local/bin/mv
Then you can use cp as usual, or specify -g to show the progress bar:
$ cp -g src dest
A simple Unix way is to go to the destination directory and run watch -n 5 du -sh in it. It can be made prettier by rendering the size as a bar. This helps in environments where you have only the standard Unix utilities and no scope for installing additional tools. du -sh is the key; watch simply reruns it every 5 seconds.
Pros: works on any Unix system. Cons: no progress bar.
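For example, assuming the files are being copied into /dest:
watch -n 5 du -sh /dest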
To add another option, you can use cpv. It uses pv to imitate the usage of cp.
It works like pv, but you can use it to recursively copy directories.
You can get it here
There's a tool, pv, that does this exact thing: http://www.ivarch.com/programs/pv.shtml
There's an Ubuntu version in apt.
How about something like
find . -type f | pv -s $(find . -type f | wc -c) | xargs -i cp {} --parents /DEST/$(dirname {})
It finds all the files in the current directory, pipes them through pv while giving pv an estimated size so the progress meter works, and then pipes them to a cp command with the --parents flag so the DEST path matches the SRC path.
One problem I have yet to overcome is that if you issue this command
find /home/user/test -type f | pv -s $(find . -type f | wc -c) | xargs -i cp {} --parents /www/test/$(dirname {})
the destination path becomes /www/test/home/user/test/....FILES... and I am unsure how to tell the command to get rid of the '/home/user/test' part. That is why I have to run it from inside the SRC directory.
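A hedged workaround, assuming GNU cp (for --parents) and filenames without spaces, is to run the pipeline from inside the source directory via a subshell, so that find emits relative paths and --parents recreates only the relative part under the destination:
( cd /home/user/test &&
  find . -type f | pv -s "$(find . -type f | wc -c)" | xargs -i cp --parents {} /www/test/ )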
Check the source code for progress_bar in my git repository below:
https://github.com/Kiran-Bose/supreme
Also try the custom bash script package supreme to verify how the progress bar works with the cp and mv commands.
Functionality overview
(1)Open Apps
----Firefox
----Calculator
----Settings
(2)Manage Files
----Search
----Navigate
----Quick access
|----Select File(s)
|----Inverse Selection
|----Make directory
|----Make file
|----Open
|----Copy
|----Move
|----Delete
|----Rename
|----Send to Device
|----Properties
(3)Manage Phone
----Move/Copy from phone
----Move/Copy to phone
----Sync folders
(4)Manage USB
----Move/Copy from USB
----Move/Copy to USB
There is the command progress (https://github.com/Xfennec/progress), a coreutils progress viewer.
Just run progress in another terminal to see the copy/move progress. For continuous monitoring, use the -M flag.
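For example:
progress -M   # in a separate terminal, while the cp/mv is running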
I am new to AccuRev; I used to use SVN earlier. I want to get a diff file consisting of all the changes in kept files in a given directory. I know that ac diff -b <file>
gives the diff for a single file, but if I have many files and I want the diff of all the kept files in a given directory, is there a straightforward command to do this, like svn diff?
You are going to need to create a script if you only want to diff kept files in a given directory. Basically you will run an 'accurev stat -k' -> parse output for given directory -> 'accurev diff -b'
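A minimal sketch of that pipeline, assuming a POSIX shell, that kept files are reported as depot-relative paths of the form /./..., and that src/mydir is a hypothetical directory of interest:
accurev stat -k -fal | grep '^/\./src/mydir/' | xargs accurev diff -b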
On a *NIX machine the commands below work nicely.
The -k option to AccuRev's stat command finds the files with "(kept)" status. Using the -fal options with stat provides just the depot-relative path to each file, so no additional filtering is needed. So the command line would be:
accurev stat -k -fal | xargs accurev diff -b
Produces output like:
accurev stat -k -fal | xargs accurev diff -b
diffing element /./COPYING
341a342
> Tue Mar 18 08:38:39 EDT 2014
> Change for demo purposes.
diffing element /./INSTALL
3a4,7
> New Change
>
> Another Change
>
With wget I normally receive only one file -- index.html. I enter the following string:
wget -e robots=off -r http://www.korpora.org/kant/aa03
which, alas, gives back only an index.html file.
The directory aa03 corresponds to Kant's works, volume 3; there must be some 560 files (pages) or so in it. These pages are readable online, but will not be downloaded. Any remedy? Thanks!
Following that link brings us to:
http://korpora.zim.uni-duisburg-essen.de/kant/aa03/
wget won't follow links that point to domains not specified by the user. Since korpora.zim.uni-duisburg-essen.de is not equal to korpora.org, wget will not follow the links on the index page.
To remedy this, use --span-hosts or -H. -rH is a VERY dangerous combination - combined, you can accidentally crawl the entire Internet - so you'll want to keep its scope very tightly focused. This command will do what you intended to do:
wget -e robots=off -rH -l inf -np -D korpora.org,korpora.zim.uni-duisburg-essen.de http://korpora.org/kant/aa03/index.html
(-np, or --no-parent, will limit the crawl to aa03/. -D will limit it to only those two domains. -l inf will crawl infinitely deep, constrained by -D and -np).
I have two directories containing source files for a project I've inherited with little in the way of documentation. How do I compare the two directories to see what the differences are?
Try this:
diff -Naur dir1/ dir2/
The -u option makes the output a little easier to read.
The -r option recurses through all subdirectories.
The -N and -a options are really only necessary if you wanted to create a patch file.
You can try Meld. It is a wonderful visual diff tool ;-)
diff -u -r dirA dirB
will show you a unified recursive diff between the files in dirA and dirB.
You may use the diff command in the shell. Or install a tool like KDiff3.
The diff command to compare directories kept telling me that I didn't have differences, when I knew there were differences.
Instead of using diff directly, I used a sorted list of md5sums and then compared those files with diff:
find /path1/dir/ -type f -exec md5sum {} + | awk '{print $2, $1}' | sort > path1.log
find /path2/dir/ -type f -exec md5sum {} + | awk '{print $2, $1}' | sort > path2.log
gvimdiff path1.log path2.log
If the beginning part of the path is causing headaches, then change it. Select the Path1 window and type:
:%s|path1|path2|g
This replaces all instances of path1 with path2 in the first file, so the diff now shows only the real differences.
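A hedged alternative that avoids the temporary files and the path-prefix problem, assuming bash (for process substitution): cd into each tree so the recorded paths are relative and directly comparable.
diff <(cd /path1/dir && find . -type f -exec md5sum {} + | sort -k 2) \
     <(cd /path2/dir && find . -type f -exec md5sum {} + | sort -k 2)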