Unix: Command to return code - find

Here is the code I use to find and remove files older than 180 days, where
PPATH=./land/arch and PERIOD=180:
find "$PPATH" -maxdepth 1 -type f -mtime +"$PERIOD" -exec rm -f {} \;
rt_CD=$?
echo $rt_CD
Regardless of whether there are any files in the directory, I get a return code of 0.
Why is this?
If there are matching files it removes them and returns 0, and if there are no files it still returns 0.

The man page says:
find exits with status 0 if all files are processed
successfully, greater than 0 if errors occur. This is
deliberately a very broad description, but if the return value is
non-zero, you should not rely on the correctness of the results of
find.
All zero files were processed correctly, therefore a zero status is returned. It is working as specified.
If you want to detect that there are no files, you will have to capture the file list first and check whether it is empty before sending it to rm.
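A minimal sketch of that approach, reusing the PPATH and PERIOD variables from the question:
files=$(find "$PPATH" -maxdepth 1 -type f -mtime +"$PERIOD")
if [ -z "$files" ]; then
    echo "no files older than $PERIOD days" >&2
    exit 1   # non-zero status signals "nothing to delete"
else
    # caveat: the emptiness test above mishandles filenames containing
    # newlines; this is a sketch, not a robust solution
    find "$PPATH" -maxdepth 1 -type f -mtime +"$PERIOD" -exec rm -f {} \;
fi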

Related

cleartool find difference recursive in certain file type to predecessor

In my script I'm calling ClearCase to check if any file of a certain file type in the current path (including all subfolders) has been changed. I'm not familiar with cleartool at all; the command should look like this:
cleartool diff -predecessor -recursive *.filetype
As a return value I only need a bool: true if any differences exist, false if nothing has changed
You need a script. A simple find + exec won't be enough, because the exit status won't be exactly what you need.
#! /bin/bash
noChange=0   ### "cleartool diff" exit status that means "no difference"
files=$(cleartool find . -name "*.filetype")
for f in ${files}; do   ### note: word-splitting here assumes no spaces in filenames
    cleartool diff -pre -options "-status_only" "${f}"
    if [ $? -ne ${noChange} ]; then
        exit 0   ### a difference was found
    fi
done
exit 1   ### no differences
(0 is true, false is 1 in shell)
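A usage sketch, assuming the script above has been saved as check_changes.sh (a name chosen here purely for illustration):
if ./check_changes.sh; then
    echo "differences exist"
else
    echo "nothing has changed"
fi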
Note the use, in cleartool diff, of the -options parameter:
-opt/ions pass-through-opts
Specifies one or more compare method options that are not directly supported by diff.
That way, you get only the status from cleartool diff, which is precisely what you want to test in your case.
Since your previous question shows you have Git too, you can easily execute that script in a bash session, even on Windows.
At first glance, the command you're looking for is something like this:
cleartool find . -name "*.filetype" -exec 'cleartool diff -pred "$CLEARCASE_PN"'
On Windows, it would look like this:
cleartool find . -name "*.filetype" -exec "cleartool diff -pred \"%CLEARCASE_PN%\""
The single quotes in the Unix example are significant since otherwise the "CLEARCASE_PN" environment variable reference would be expanded at the time you start the command and not when the -exec is actually fired.
The \"%CLEARCASE_PN%\" construct in the Windows command line and the "$CLEARCASE_PN" on unix is to account for filenames including spaces.
Now, looking at what you want... I'm not sure it would accomplish the underlying purpose, whatever that is.

Using find to open all files in subdirectories

Could you please help me with the find syntax? I'm trying to replicate the effect of this command, which opens all files in each of the specified subdirectories:
open mockups/dashboard/* mockups/widget/singleproduct/* mockups/widget/carousel/*
I'd like to make it work for any set of subdirectories below mockups.
I can show all subdirectories with:
find mockups -type d -print
But I'm not sure how to use xargs to add in the "*". Also, I don't want to separately execute open for each file with "-exec open {} \;", because that launches 50 different copies of Preview, when what I need is one instance of Preview with the 50 files loaded into it.
Thanks!
The version of find I have at hand allows you to specify a + sign after the -exec argument:
From the man page:
-exec command {} +
This variant of the -exec action runs the specified command on the
selected files, but the command line is built by appending each
selected file name at the end; the total number of invocations of
the command will be much less than the number of matched files.
The command line is built in much the same way that xargs builds
its command lines. Only one instance of `{}' is allowed within
the command. The command is executed in the starting directory.
That means that as few instances of open will be executed as possible, e.g.:
find mockups -type f -exec open {} +
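If your find lacks the + variant, a rough equivalent with xargs (which the man page excerpt above alludes to) might be:
find mockups -type f -print0 | xargs -0 open
Both forms may still split a very long file list into more than one open invocation, but for 50 files you should get a single one. (-print0 and -0 are not POSIX, but both are available in the BSD tools that ship with macOS, where open and Preview live.)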

Stat and file not modified in last minutes

I need to get, with the stat unix command (or something similar like find), possibly in one command line, all files in a folder that are NOT changed in the last 5 minutes, for example.
I found a lot of examples of the opposite: searching for files in a dir modified in the last 3 minutes or similar.
What I need is to find files that are NOT changed (using modification time or size in bytes) in the last x minutes.
Is it possible to do that?
Stefano
find supports the -not operator in front of any test.
So take the most appropriate find command you've found and put -not in there.
Try this:
find . -maxdepth 1 -not -mmin -5
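A slight variation, restricting the match to regular files and using the POSIX ! in place of GNU's -not (note that -mmin itself is a GNU extension, so this still assumes GNU find):
find . -maxdepth 1 -type f ! -mmin -5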

Why does grep hang when run against the / directory?

My question is in two parts:
1) Why does grep hang when I grep all files under "/"?
For example:
grep -r 'h' ./
(Note: right before the hang/crash, I see some "no such device or address" messages regarding sockets....)
Of course, I know that grep shouldn't run against a socket, but I would think that since sockets are just files in Unix, it should return a negative result, rather than crashing.
2) Now, my follow-up question: in any case, how can I grep the whole filesystem? Are there certain *NIX directories which we should leave out when doing this? In particular, I'm looking for all recently written log files.
As @ninjalj said, if you don't use -D skip, grep will try to read all your device files, socket files, and FIFO files. In particular, on a Linux system (and many Unix systems), it will try to read /dev/zero, which appears to be infinitely long.
You'll be waiting for a while.
If you're looking for a system log, starting from /var/log is probably the best approach.
If you're looking for something that really could be anywhere in your file system, you can do something like this:
find / -xdev -type f -print0 | xargs -0 grep -H pattern
The -xdev argument to find tells it to stay within a single filesystem; this will avoid /proc and /dev (as well as any other mounted filesystems). -type f limits the search to ordinary files. -print0 prints the file names separated by null characters rather than newlines; this avoids problems with files having spaces or other funny characters in their names.
xargs reads a list of file names (or anything else) on its standard input and invokes the specified command on everything in the list. The -0 option works with find's -print0.
The -H option to grep tells it to prefix each match with the file name. By default, grep does this only if there are two or more file names on its command line. Since xargs splits its arguments into batches, it's possible that the last batch will have just one file, which would give you inconsistent results.
Consider using find ... -name '*.log' to limit the search to files with names ending in .log (assuming your log files have such names), and/or using grep -I ... to skip binary files.
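Putting those suggestions together, a sketch (assuming GNU find and grep, and that your log names really do end in .log; "pattern" is a placeholder):
find / -xdev -type f -name '*.log' -print0 | xargs -0 grep -IH pattern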
Note that all this depends on GNU-specific features. Some of these options might not be available on MacOS (which is based on BSD) or on other Unix systems. Consult your local documentation, and consider installing GNU findutils (for find and xargs) and/or GNU grep.
Before trying any of this, use df to see just how big your root filesystem is. Mine is currently 268 gigabytes; searching all of it would probably take several hours. A few minutes spent (a) restricting the files you search and (b) making sure the command is correct will be well worth the time you spend.
By default, grep tries to read every file. Use -D skip to skip device files, socket files and FIFO files.
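For example, using the original command with -D skip added (GNU grep; -D skip is shorthand for --devices=skip):
grep -r -D skip 'h' /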
If you keep seeing error messages, then grep is not hanging. Keep iotop open in a second window to see how hard your system is working to pull all the contents off its storage media into main memory, piece by piece. This operation is going to be slow unless you have a very barebones system.
Now, my follow-up question: in any case, how can I grep the whole filesystem? Are there certain *NIX directories which we should leave out when doing this? In particular, I'm looking for all recently written log files.
Grepping the whole FS is very rarely a good idea. Try grepping the directory where the log files should have been written; likely /var/log. Even better, if you know anything about the names of the files you're looking for (say, they have the extension .log), then do a find or locate and grep the files reported by those programs.
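A sketch of that narrower approach, assuming GNU find, that the logs live under /var/log, and that their names end in .log ("pattern" is again a placeholder):
find /var/log -type f -name '*.log' -mmin -60 -exec grep -H 'pattern' {} +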

Find files modified within one hour in HP-UX

I'm searching through the manual page for find, but I can't see a way to run a command which will find all files modified within the last hour. I can only see a way to do it in days.
Guess this should do
find / -type f -mmin -60
This will list files, starting from the root, that were modified within the past 60 minutes. (Note, though, that -mmin is a GNU find extension; as the next answer explains, stock HP-UX find does not support it.)
The best you can do in HP-UX using the find command is to look for everything that was modified in the last 24 hours, because the HP-UX find command only checks modification time in 24-hour increments. This is done by:
find / -type f -mtime 1
This will list all of the files, recursively, starting in the root directory, that were modified in the last 24 hours. Here's the entry from the man page on the -mtime option:
-mtime n
True if the file modification time subtracted from the initialization time is n-1 to n multiples of 24 h. The initialization time shall be a time between the invocation of the find utility and the first access by that invocation of the find utility to any file specified in its path operands.
If you have the permissions to create the file, use this:
touch -t YYYYMMDDHHMM temp
Then use the -newer option
find . -newer temp
This should list the files newer than the temp file, whose timestamp you can set to one hour ago.
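A concrete sketch of that recipe, with a made-up timestamp (here assuming "now" is 2024-06-01 10:30, so one hour ago is 09:30):
touch -t 202406010930 /tmp/marker   # reference file stamped one hour ago
find . -type f -newer /tmp/marker   # files modified after the marker
rm /tmp/marker                      # clean up the reference file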