So the typical way I would create a diff log/patch between two branches in ClearCase would be to simply create two views and do a typical Unix diff. But I have to assume that there is a more ClearCase way (and also a '1-liner').
So, knowing how to get a list of all files that have been modified on a branch:
cleartool find . -type f -branch "brtype(<BRANCH_NAME>)" -print
and knowing how to get diff-formatted output for two separate files:
cleartool diff FILE FILE@@/main/PARENT_BRANCH_PATH/LATEST
So, does anyone see any issues with the following to get a diff for all files that have been changed on a branch?
cleartool find . -type f -branch "brtype(CHILD_BRANCH)" -exec 'cleartool diff -ser $CLEARCASE_PN `echo $CLEARCASE_XPN | sed "s/CHILD_BRANCH/LATEST/"` ' > diff.log
Any modifications and comments are greatly welcomed.
Thanks in advance!
Update: any ideas on how to get this to be a Unix unified diff would also be greatly appreciated.
Update 2: So I think I have my solution; thanks go to VonC for sending me in the right direction:
cleartool find . -type f -branch "brtype(CHILD_BRANCH)" -exec 'cleartool get -to $CLEARCASE_PN.prev `echo $CLEARCASE_XPN | sed "s/CHILD_BRANCH/LATEST/"`; diff -u $CLEARCASE_PN.prev $CLEARCASE_PN; rm -f $CLEARCASE_PN.prev' > CHILD_BRANCH.diff
The output seems to work; I can read the file into Kompare without complaints.
The idea is sound.
I would simply make sure the $CLEARCASE_PN and $CLEARCASE_XPN are used with double quotes around them, to account for potential spaces in the file path or file name (as illustrated in "How do I list ClearCase versions without the Fully-qualified version?").
cleartool find . -type f -branch "brtype(CHILD_BRANCH)" -exec 'cleartool diff -ser "$CLEARCASE_PN" `echo "$CLEARCASE_XPN" | sed "s/CHILD_BRANCH/LATEST/"` ' > diff.log
Using simple quotes for the -exec directive is a good idea, as explained in "CLEARCASE_XPN not parsed as variable in clearcase command".
However, cleartool diff, even with the -ser (-serial) option, doesn't produce exactly a Unix unified diff format (or Unified Format for short).
The -diff(_format) option is the closest, as I mention in "How would you measure inserted / changed / removed code lines (LoC)?"
The -diff_format option causes both the headers and differences to be reported in the style of the UNIX and Linux diff utility, writing a list of the changes necessary to convert the first file being compared into the second file.
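For example, substituting -diff_format into the one-liner above should get much closer to unified output (an untested sketch, reusing the branch and file names from the question):

cleartool find . -type f -branch "brtype(CHILD_BRANCH)" -exec 'cleartool diff -diff_format "$CLEARCASE_PN" `echo "$CLEARCASE_XPN" | sed "s/CHILD_BRANCH/LATEST/"`' > diff.log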
One idea would be to not use cleartool diff, but to use diff directly, since in a dynamic view it can access the right version through the extended pathname of the elements found.
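A sketch of that idea, assuming a dynamic view (where the version-extended path can be read like an ordinary file):

cleartool find . -type f -branch "brtype(CHILD_BRANCH)" -exec 'diff -u `echo "$CLEARCASE_XPN" | sed "s/CHILD_BRANCH/LATEST/"` "$CLEARCASE_PN"' > diff.log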
The OP ckcin's solution is close to what I suggested, with cleartool get:
cleartool find . -type f -branch "brtype(CHILD_BRANCH)" -exec 'cleartool get -to $CLEARCASE_PN.prev `echo $CLEARCASE_XPN | sed "s/CHILD_BRANCH/LATEST/"`; diff -u $CLEARCASE_PN.prev $CLEARCASE_PN; rm -f $CLEARCASE_PN.prev' > CHILD_BRANCH.diff
The output seems to work; I can read the file into Kompare without complaints.
On multiple lines, for readability (join back into a single line to run it):
cleartool find . -type f -branch "brtype(CHILD_BRANCH)"
-exec 'cleartool get -to $CLEARCASE_PN.prev
`echo $CLEARCASE_XPN | sed "s/CHILD_BRANCH/LATEST/"`;
diff -u $CLEARCASE_PN.prev $CLEARCASE_PN;
rm -f $CLEARCASE_PN.prev' > CHILD_BRANCH.diff
(Note that $CLEARCASE_XPN and $CLEARCASE_PN are set by the cleartool find command; they're not variables you set yourself.)
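For one element, those variables might expand to something like this during the find (hypothetical values, with the child branch taken directly off /main):

CLEARCASE_PN  = ./src/foo.c
CLEARCASE_XPN = ./src/foo.c@@/main/CHILD_BRANCH

so the sed replacement turns the extended path into ./src/foo.c@@/main/LATEST, the latest version on the parent branch.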
Transferring the answer from VonC and einpoklum to Windows, I came up with the following. Create a separate batch file, which I called diffClearCase.bat; this simplifies the command line significantly. It creates a separate tree for all modified files, which I personally liked, but the files and folders can be deleted afterwards.
@echo off
SET PLAINFILE=%1
SET PLAINDIR=%~dp1
SET CLEARCASE_FILE=%2
SET BRANCH_NAME=%3
SET SOURCE_DRIVE=T:
SET TARGET_TEMP_DIR=D:
SET DIFF_TARGET_FILE=D:\allPatch.diff
call set BASE_FILE=%%CLEARCASE_FILE:%BRANCH_NAME%=LATEST%%
call set TARGET_FILE=%%PLAINFILE:%SOURCE_DRIVE%=%TARGET_TEMP_DIR%%%
call set TARGET_DIR=%%PLAINDIR:%SOURCE_DRIVE%=%TARGET_TEMP_DIR%%%
echo Diffing file %PLAINFILE%
IF NOT EXIST %TARGET_DIR% mkdir %TARGET_DIR%
cleartool get -to %TARGET_FILE% %BASE_FILE%
diff -u %TARGET_FILE% %PLAINFILE% >> %DIFF_TARGET_FILE%
rem del /F/Q %TARGET_FILE%
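A hypothetical manual invocation of the batch file (paths made up for illustration) would look like:

diffClearCase.bat T:\myvob\src\foo.c T:\myvob\src\foo.c@@\main\CHILD_BRANCH CHILD_BRANCH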
And then I created a second bat file which simply takes the branch name as an argument. In our case this drive contains multiple VOBs, so I iterate over them and do this per VOB.
@echo off
SET BRANCHNAME=%1
SET DIFF_TARGET_FILE=D:\allPatch.diff
SET SOURCE_DRIVE=T:
SET DIFF_TOOL=D:\Data\Scripts\diffClearCase.bat
IF EXIST %DIFF_TARGET_FILE% DEL /Q %DIFF_TARGET_FILE%
for /D %%V in ("%SOURCE_DRIVE%\*") DO (
echo Checking VOB %%V
cd %%V
cleartool find %%V -type f -branch "brtype(%BRANCHNAME%)" -exec "%DIFF_TOOL% \"%%CLEARCASE_PN%%\" \"%%CLEARCASE_XPN%%\" %BRANCHNAME%"
)
Related
I am very new to ClearCase, and one of the tasks I have on my hands is to find frequently modified files in ClearCase. Suppose we have an integration stream with numerous files in it; I need to know which files are modified frequently, e.g. a certain file that was modified 5 times in the last two months.
I have access to ClearCase commands as well as the GUI.
Is there a way to solve this problem?
Thanks
Following the cleartool find examples, you can do a search between two dates:
cleartool find . -version "{created_since(date1) &&
                !created_since(date2) &&
                brtype(myIntStream)}" -exec "cleartool descr -fmt \"%En\"" \
| sort | uniq -c | sort -n
(This is the Windows syntax, which means you need GoW (Gnu On Windows) installed for the sort and uniq commands.)
As Brian Cowan adds in the comments, the command would be:
cleartool find -all -version "{created_since(date1) &&
                !created_since(date2) &&
                brtype(myIntStream)}" -exec "cleartool desc -fmt \"%En\n\" \"%CLEARCASE_XPN%\"" \
| sort | uniq -c | sort -n
On Unix:
cleartool find -all -version "{created_since(date1) &&
                !created_since(date2) &&
                brtype(myIntStream)}" -exec 'cleartool desc -fmt "%En\n" "$CLEARCASE_XPN"' \
| sort | uniq -c | sort -n
-all is used instead of the current-directory form, to avoid issues if the command isn't run at the VOB root.
If you don't care about the interval, but only want the last 2 months, drop the !created_since line.
Alternatively, use "today" as the second date, though that would everything modified since midnight your local on the day you run the command.
I'm trying to change the name of "my-silly-home-page-name.html" to "index.html" in all documents within a given master directory and subdirs.
I saw this: Shell script - search and replace text in multiple files using a list of strings.
And this: How to change all occurrences of a word in all files in a directory
I have tried this:
grep -r "my-silly-home-page-name.html" .
This finds the lines on which the text exists, but now I would like to replace 'my-silly-home-page-name' with 'index'.
How would I do this with sed or perl?
Or do I even need sed/perl?
Something like:
grep -r "my-silly-home-page-name.html" . | sed 's/$1/'index'/g'
?
Also; I am trying this with perl, and I try the following:
perl -i -p -e 's/my-silly-home-page-name\.html/index\.html/g' *
This works, but I get an error when perl encounters directories, saying "Can't do inplace edit: SOMEDIR-NAME is not a regular file, <> line N"
Thanks,
jml
find . -type f -exec \
perl -i -pe's/my-silly-home-page-name(?=\.html)/index/g' {} +
Or if your find doesn't support -exec +,
find . -type f -print0 | xargs -0 \
perl -i -pe's/my-silly-home-page-name(?=\.html)/index/g'
Both pass as many names at a time as possible to Perl as arguments. Both work with any file name, including those that contain newlines.
If you are on Windows and you are using a Windows build of Perl (as opposed to a cygwin build), -i won't work unless you also do a backup of the original. Change -i to -i.bak. You can then go and delete the backups using
find . -type f -name '*.bak' -delete
This should do the job:
find . -type f -print0 | xargs -0 sed -e 's/my-silly-home-page-name\.html/index\.html/g' -i
Basically, it recursively gathers all the files from the given directory (. in the example) with find and, through xargs, runs sed with the same substitution command as in the perl command from the question.
Regarding the question about sed vs. perl, I'd say that you should use the one you're more comfortable with since I don't expect huge differences (the substitution command is the same one after all).
There are probably better ways to do this but you can use:
find . -name oldname.html | perl -e 'map { s/[\r\n]//g; $old = $_; s/oldname\.html$/newname.html/; rename $old, $_ } <>';
Fyi, grep searches for a pattern; find searches for files.
I am trying to recursively make all the .exe files in a directory executable.
I did a bit more digging before posting and ended up finding what I needed. I will post my answer just in case anyone can use this information. Hope that is alright; I am new here.
ct find . -all -name *.bat -print -exec "cleartool protect -chmod +x -file ""%CLEARCASE_PN%"""
When you consider the man page of cleartool find, and the additional examples of cleartool find:
-all generally means a quite lengthy search, especially for a large VOB with a long history, so you want to add selection criteria to reduce the time, like -type f to only consider files.
-print isn't necessary, unless you want the list of all files changed; but simply printing each element can slow down the operation considerably.
The doubled quotation marks are needed to pick up filenames that contain spaces, but you can use a more readable escape notation instead: \"
ct doesn't exist unless you define it as an alias for cleartool (in Windows: doskey ct=cleartool $*).
So:
ct find . -all -type f -name "*.bat" -exec "cleartool protect -chmod +x -file \"%CLEARCASE_PN%\""
I have a Perl script which is used to process some data files from a given directory. I have written the bash script below to look for the last updated file in the given directory and process that file.
cd $data_dir
find \( -type f -mtime -1 \) -exec ./script.pl {} \;
Sometimes, users copy multiple files to the data dir, and then the earlier ones are skipped: the Perl script is executed against only the last updated file. Can you please suggest how to fix this using the bash script?
Try
cd $data_dir
find \( -type f -mtime -1 \) -exec ./script.pl {} +
Note the termination of -exec with a + vs your \;
From the man page
-exec command {} +
This variant of the -exec action runs the specified command on the selected files, but the command line is built by appending each selected file name at the end;
Now that one or more file names will be passed into your Perl script, you can alter the script to iterate over each passed-in file name.
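You can see the batching effect with a harmless command such as echo (illustrative only):

find . -type f -mtime -1 -exec echo {} \;   # runs echo once per file
find . -type f -mtime -1 -exec echo {} +    # runs echo once with all the files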
If I understood the question correctly, you need to process any files that were created or modified in a directory since the last time your script was run.
In my opinion find is not the right tool to determine those files, because it has no notion of which files it has already seen.
Using any of the -atime/-ctime/-mtime options will either produce duplicates if you run your script twice in the specified period, or miss some files if it is not executed at the right time. The timing intricacies of using these options for something like this are not easy to deal with.
I can propose a few alternatives:
a) Use three directories instead of one: incoming/, processing/ and done/. Your users should only be allowed to put files in incoming/. You move any files in there to processing/ with a simple mv incoming/* processing/ before running your Perl script. Then you move them from processing/ to done/ when it's over (a sketch is given after option (e) below).
In my opinion this is the simplest and best solution, and the one used by mail servers etc. when dealing with this issue. If I were you, and there were no special circumstances preventing you from doing this, I'd stop reading here.
b) Have your finder script touch a special file (e.g. .timestamp, perhaps in a different directory, so that your users will not tamper with it) when it's done. This will allow your script to remember the last time it was run. Then use
find \( -cnewer .timestamp -o -newer .timestamp \) -type f -exec ./script.pl '{}' ';'
to run your perl script for each file. You should modify your perl script so that it can run repeatedly with a different file name each time. If you can modify it to accept multiple files in one go, you can also run it with
find \( -cnewer .timestamp -o -newer .timestamp \) -type f -exec ./script.pl '{}' +
which will minimise the number of ./script.pl processes. Take care to handle the first run of the find script, when the .timestamp file is missing. A good solution would be to simply ignore it by not using the -*newer options at all in that case. Also keep in mind that there is a race condition where files added after find was started but before touching the timestamp file will not be processed.
c) As a variation of (b), have your script update the timestamp with the time of the processed file that was created/modified most recently. This is tricky, because find cannot order its output on its own. You could use a wrapper around your perl script to handle this:
#!/bin/bash
# Update .timestamp to the newest ctime/mtime among the files to process,
# then hand all of them to the Perl script.
for i in "$@"; do
    find "$i" \( -cnewer .timestamp -o -newer .timestamp \) -exec touch -r '{}' .timestamp ';'
done
./script.pl "$@"
This will update the timestamp if it is called to process a file with a newer mtime or ctime, minimising (but not eliminating) the race condition. It is, however, somewhat awkward - unavoidable, since bash's [[ -nt ]] test seems to check only the mtime. It might be better if your Perl script handled that on its own.
d) Have your script store each processed filename and its timestamps somewhere and then skip duplicates. That would allow you to just pass all files in the directory to it and let it sort out the mess. Kinda tricky though...
e) Since you are using Linux, you might want to have a look at inotify and the inotify-tools package - specifically the inotifywait tool. With a bit of scripting it would allow you to process files as they are added to the directory:
inotifywait -e MOVED_TO -e CLOSE_WRITE -m -r testd/ | grep --line-buffered -e MOVED_TO -e CLOSE_WRITE | while read d e f; do ./script.pl "$d$f"; done
This has no race conditions, as long as your users do not create/copy/move any directories rather than just files.
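For completeness, here is a minimal sketch of option (a); the directory names and the failure handling are assumptions:

#!/bin/bash
# Drain incoming/ first, so files that arrive mid-run wait for the next run.
mv incoming/* processing/ 2>/dev/null
for f in processing/*; do
    [ -f "$f" ] || continue               # skip if the glob matched nothing
    ./script.pl "$f" && mv "$f" done/     # leave the file in processing/ on failure
done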
The Perl script will only execute against the files which find gives it. Perhaps you should remove the -mtime -1 option from the find command, so that it picks up all the files in the directory?
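That is, keeping everything else from the question's command:

cd $data_dir
find \( -type f \) -exec ./script.pl {} \;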
What's the easiest/best way to find and remove empty (zero-byte) files using only tools native to Mac OS X?
Easy enough:
find . -type f -size 0 -exec rm -f '{}' +
To ignore any file having xattr content (assuming the MacOS find implementation):
find . -type f -size 0 '!' -xattr -exec rm -f '{}' +
That said, note that many xattrs are not particularly useful (for example, com.apple.quarantine exists on all downloaded files).
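If you want to inspect a file's extended attributes before deciding what to exclude, macOS ships an xattr utility:

xattr -l some-downloaded-file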
You can lower the potentially huge number of forks to run /bin/rm by:
find . -type f -size 0 -print0 | xargs -0 /bin/rm -f
The above command is very portable, running on most versions of Unix rather than just Linux boxes, and on versions of Unix going back for decades. For long file lists, several /bin/rm commands may be executed to keep the list from overrunning the command line length limit.
A similar effect can be achieved with less typing on more recent OSes, using a + in find to replace the most common use of xargs, in a style that still lends itself to other actions besides /bin/rm. In this case, find will handle splitting truly long file lists into separate /bin/rm commands. The {} is customarily quoted to keep the shell from doing anything to it; the quotes aren't always required, but the intricacies of shell quoting are too involved to cover here, so when in doubt, include the apostrophes:
find . -type f -size 0 -exec /bin/rm -f '{}' +
On Linux, briefer approaches are usually available using -delete. Note that recent find's -delete primary is implemented directly with unlink(2) and doesn't spawn a zillion /bin/rm commands, or even the few that xargs and + do. Mac OS find also has the -delete and -empty primaries.
find . -type f -empty -delete
To stomp empty (and newly-emptied) files - directories as well - many modern Linux hosts can use this efficient approach:
find . -empty -delete
find /path/to/stuff -empty
If that prints the list of files you're looking for, then make the command:
find /path/to/stuff -empty -exec rm {} \;
Be careful! There won't be any way to undo this!
Use:
find . -type f -size 0b -exec rm {} ';'
with all the other possible variations to limit what gets deleted.
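For instance, a variation restricted to a single directory level and a name pattern (path and pattern here are hypothetical):

find ./logs -maxdepth 1 -type f -name '*.log' -size 0 -exec rm {} ';'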
A very simple solution in case you want to do it inside ONE particular folder:
Go inside the folder, right click -> View -> As List.
Now you'll see all the files in a list. Click on "Size", which should be a column heading. This will sort all the files by size.
Finally, you'll find all the files that have zero bytes at the end of the list. Just select those and delete them!