inotifywait output always shows the filename with .filepart - inotify

I am using inotifywait to monitor a large file transfer done with WinSCP:
inotifywait --event close_write --event moved_to --format '%w%f %e %T' \
    --timefmt '%F %T' "$watchFolder" | while read eventOutputInfo; do
    echo "eventOutputInfo is:" $eventOutputInfo
done
but it always prints out the filename with .filepart at the end. Under the target directory, after the transfer is done, the file has the correct name without the .filepart. I am also not sure why the moved_to event never appeared in the output.
/root/p/file.filepart CLOSE_WRITE,CLOSE 2015-12-08 14:56:16
Can someone please let me know which event I should watch for so that .filepart is not part of the filename in the inotifywait output? Thanks.

You can run inotifywait with the monitor switch to observe what happens throughout the lifecycle of the file transfer just to get an idea of what events are triggered. For me:
inotifywait -m .
produced the following output when I copied a file via Dolphin file manager:
./ CREATE filename.part
./ OPEN filename.part
./ MODIFY filename.part
./ MODIFY filename.part
./ MODIFY filename.part
... repeated many times ...
./ MODIFY filename.part
./ MODIFY filename.part
./ MODIFY filename.part
./ CLOSE_WRITE,CLOSE filename.part
./ MOVED_FROM filename.part
./ MOVED_TO filename
./ ATTRIB filename
./ ATTRIB filename
./ OPEN,ISDIR
./ CLOSE_NOWRITE,CLOSE,ISDIR
./ OPEN,ISDIR
./ CLOSE_NOWRITE,CLOSE,ISDIR
So maybe it is one of those events that you are looking for. The .part or .filepart extension is a normal side effect of file transfers. I can't say why the MOVED_TO event didn't trigger for you, but if you experiment with the monitor switch (-m) you might be able to find an explanation.
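Based on that output, the MOVED_TO event carrying the final filename is the one to watch for. Below is a minimal sketch built on the command from the question (assuming the client always renames file.filepart to the final name when the transfer completes):
inotifywait -m --event moved_to --format '%w%f %e %T' --timefmt '%F %T' "$watchFolder" |
while read -r path events timestamp; do
    # the last read variable collects the rest of the line, so the
    # two-word timestamp produced by '%F %T' stays intact
    echo "transfer finished: $path ($events at $timestamp)"
done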

Related

Move file with a dash

I am moving a file with Midnight Commander to a file named "-name.csv", but mc thinks I am passing an option. Why is this happening? And how can I move to a file with a name like "-name.csv"?
desktop:~/s$ mv name.csv "-name.csv"
mv: invalid option -- 'a'
It's not mc, it's mv. Quoting doesn't help because the quotes are interpreted by the shell, so mv receives the unquoted parameters name.csv and -name.csv. You need to hide the dash so that the option parser in mv stops treating the argument as an option. Use a relative path with ./ for the current directory, or a full path:
mv name.csv ./-name.csv
mv name.csv "`pwd`"/-name.csv
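GNU mv (and POSIX-conforming utilities generally) also accept -- to mark the end of options, which is another way to hide the dash; whether this works depends on the mv implementation in use:
mv -- name.csv -name.csv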

sed -n function calling in same line repeatedly

I'm a complete novice wrt unix and writing shell scripts, so apologies if the solution to my problem is quite banal.
Essentially, though, I'm working on a shell script that reads from a TextEdit file called "sursecout.txt" and runs it through another script called "sursec.x" (where sursec.x is simply a series of FORTRAN integrations). It then creates a folder named after a certain Jacobi integral ("CJ =") and stores the ten SurSec[n] files there (where n = integer). My problem is that the different folders are created correctly with appropriate names, but each is filled with identical output files. My suspicion is that something is wrong with my sed command, in that it's reading the same two lines over and over again (whereas it should be reading the first two lines of sursecout.txt, then the next two, etc.).
Here are the first two folders I want to make, but I have 30, so any help would be appreciated.
./sursec.x < ./sursecout.txt
sed -n '1,2p;3q' sursecout.txt
cd ..
mv ./data ./CJ=3.029990
mkdir data
cd SurSec
./sursec.x < ./sursecout.txt
sed -n '3,4p;5q' sursecout.txt
cd ..
mv ./data ./CJ=3.030659
mkdir data
cd SurSec
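A possible explanation, judging from the snippet: the output of sed is never fed into sursec.x, because every block redirects the whole sursecout.txt into it, so each run sees identical input. A minimal sketch of the likely intent (assuming sursec.x reads its two parameter lines from standard input):
# feed only the relevant pair of lines into sursec.x
sed -n '1,2p;3q' sursecout.txt | ./sursec.x    # first block: lines 1-2
sed -n '3,4p;5q' sursecout.txt | ./sursec.x    # second block: lines 3-4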

What order does find(1) list files in?

On extfs, if there are only file creations and no deletions in a directory, I would expect find . -type f to list the files either in chronological order of creation (or mtime) or, failing that, at least in reverse chronological order... depending on how a directory's contents are traversed.
But that isn't the behavior I'm seeing.
The following code, for example, creates a fresh set of directories and files:
#!/bin/bash -u
for i in a/ a/{1,2,3,4,5} b/ b/{1,2,3,4,5}; do
    if echo "$i" | egrep -q "/$"; then
        echo "Creating dir $i"
        mkdir -p "$i"
    else
        echo "Creating file $i"
        touch "$i"
    fi
    sleep 0.500
done
Output of the above snippet:
Creating dir a/
Creating file a/1
Creating file a/2
Creating file a/3
Creating file a/4
Creating file a/5
Creating dir b/
Creating file b/1
Creating file b/2
Creating file b/3
Creating file b/4
Creating file b/5
However, find lists the files in a somewhat random order. For example, a/2 doesn't follow a/1, and b/2 doesn't follow b/1:
$ find . -type f
./a/1
./a/3
./a/4
./a/2 <----
./a/5
./b/1
./b/3
./b/4
./b/2 <----
./b/5
Any idea why this should happen?
My main problem is: I have a very large volume storing 100s of 1000s of files. I need to traverse these files and directories in the order of their creation/modification (mtime) and pipe each file to another process for further processing. But I don't necessarily want to first create a temporary list of this large set of files and then sort it based on mtime before piping it to my process.
find lists objects in the order that they are reported by the underlying filesystem implementation. You can tell ls to show you this "raw" order by passing it the -f option.
The order could be anything at all -- alphabetical, by mtime, by atime, by length of name, by permissions, or something completely different. The ordering can even vary from one listing to the next.
It's common for filesystems to report in an order that reflects the filesystem's strategy for allocating directory slots to files. If this is some sort of hash-based strategy based on filename then the order can appear nonsensical. This is what happens with widely-used Linux and BSD filesystem implementations. Since you mention extfs this is probably what causes the ordering you're seeing.
So, if you need the output from find to be ordered in a particular way then you'll have to create that order yourself. Maybe based on something like:
find . -type f -exec ls -ltr --time-style=+%s {} \; | sort -n -k6
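With GNU find, a lighter-weight variant avoids spawning ls once per file by printing the mtime directly (a sketch, assuming no filenames contain embedded newlines):
# %T@ prints mtime as seconds since the epoch; sort numerically, then drop the timestamp
find . -type f -printf '%T@ %p\n' | sort -n | cut -d' ' -f2-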

Recursively replace colons with underscores in Linux

First of all, this is my first post here and I must specify that I'm a total Linux newb.
We have recently bought a QNAP NAS box for the office, on this box we have a large amount of data which was copied off an old Mac XServe machine. A lot of files and folders originally had forward slashes in the name (HFS+ should never have allowed this in the first place), which when copied to the NAS were all replaced with a colon.
I now want to rename all colons to underscores, and have found the following commands in another thread here: pitfalls in renaming files in bash
However, the flavour of Linux on this box does not understand the rename command, so I'm having to use mv instead. I have tried using the code below, but this only works for the files in the current folder. Is there a way I can change this to include all subfolders?
for f in *.*; do mv -- "$f" "${f//:/_}"; done
I have found that I can find all the files and folders in question using the find command as follows:
Files:
find . -type f -name "*:*"
Folders:
find . -type d -name "*:*"
I have been able to export a list of the results above by using
find . -type f -name "*:*" > files.txt
I tried using the command below, but I'm getting an error message from find saying it doesn't understand the -exec switch. Is there a way to pipe this all into one command, or could I somehow use the files I exported previously?
find . -depth -name "*:*" -exec bash -c 'dir=${1%/*} base=${1##*/}; mv "$1" "$dir/${base//:/_}"' _ {} \;
Thank you!
Vincent
So your for loop code works, but only in the current dir. Also, you are able to use find to build a file with all the files with : in the filename.
So, as you've already done all this, I would just loop over each line of your file, and perform the same mv command.
Something like this:
for f in `cat files.txt`; do mv $f "${f//:/_}"; done
EDIT:
As pointed out by tripleee, using a while loop is a better solution, e.g.:
while read -r f; do mv "$f" "${f//:/_}"; done <files.txt
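The folders with colons mentioned in the question could be handled the same way, provided deeper directories are renamed before their parents so the stored paths stay valid. A rough sketch (assuming none of the paths contain newlines):
find . -type d -name "*:*" > dirs.txt
# reverse lexicographic order lists child directories before their parents
sort -r dirs.txt | while read -r d; do
    parent=${d%/*}; base=${d##*/}
    mv "$d" "$parent/${base//:/_}"
done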
Hope this helps.
Will

Using SAS and mkdir to create a directory structure in Windows

I want to create a directory structure in Windows from within SAS. Preferably using a method that will allow me to specify a UNC naming convention such as:
\\computername\downloads\x\y\z
I have seen many examples for SAS on the web using the DOS mkdir command called via %sysexec or the X command. The nice thing about the mkdir command is that it will create any intermediate folders if they don't already exist. I successfully tested the commands below from the prompt and they behaved as expected (quoting does not seem to matter, as I have no spaces in my path names):
mkdir \\computername\downloads\x\y\z
mkdir d:\y
mkdir d:\y\y
mkdir "d:\z"
mkdir "d:\z\z"
mkdir \\computername\downloads\z\z\z
mkdir "\\computername\downloads\z\z\z"
The following run fine from SAS:
x mkdir d:\x;
x 'mkdir d:\y';
x 'mkdir "d:\z"';
x mkdir \\computername\downloads\x;
x 'mkdir \\computername\downloads\y';
But these do not work when run from SAS, e.g.:
x mkdir d:\x\x;
x 'mkdir d:\y\y';
x 'mkdir "d:\z\z"';
x mkdir \\computername\downloads\x\y\z ;
x 'mkdir "\\computername\downloads\z"';
** OR **;
%sysexec mkdir "\\computername\downloads\x\y\z ";
** OR **;
filename mkdir pipe "mkdir \\computername\downloads\x\y\z";
data _null_;
infile mkdir;
input;
put _infile_;
run;
It does not work. Not only that, but the window closes immediately even though I have options xwait specified, so there is no opportunity to see any ERROR messages. I have tried all methods with both the UNC path and a drive-letter path, i.e. D:\downloads\x\y\z.
If I look at the error messages being returned by the OS:
%put %sysfunc(sysrc()) %sysfunc(sysmsg());
I get the following:
-20006 WARNING: Physical file does not exist, d:\downloads\x\x\x.
Looking at the documentation for the mkdir command, it appears that it only creates intermediate folders when 'command extensions' are enabled. This can be achieved by adding /E:ON to cmd.exe. I've tried changing my code to use:
cmd.exe /c /E:ON mkdir "\\computername\downloads\x\y\z"
And still no luck!
Can anyone tell me why everyone else on the internet seems to be able to get this working from within SAS except for me? Again, it works fine from a DOS prompt - just not from within SAS.
I'd prefer an answer that specifically addresses this issue (I know there are other solutions that use multiple steps or dcreate()).
I'm running WinXP 32Bit, SAS 9.3 TS1M2. Thanks.
Here is a trick that uses the LIBNAME statement to make a directory
options dlcreatedir;
libname newdir "/u/sascrh/brand_new_folder";
I believe this is more reliable than an X statement.
Source: SAS trick: get the LIBNAME statement to create folders for you
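Applied to the UNC path from the question, a minimal sketch might look like this (written defensively on the assumption that each LIBNAME creates only the final folder level, so the levels are created one at a time):
options dlcreatedir;
/* create each level of \\computername\downloads\x\y\z in turn */
libname lvl1 "\\computername\downloads\x";
libname lvl2 "\\computername\downloads\x\y";
libname lvl3 "\\computername\downloads\x\y\z";
/* the librefs exist only for their side effect, so clear them */
libname lvl1 clear;
libname lvl2 clear;
libname lvl3 clear;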
You need to use the mkdir option -p which will create all the sub folders
i.e.
x mkdir -p "c:\newdirectory\level 1\level 2";
I'm on WinXP as well, using SAS 9.3 TS1M1. The following works for me as advertised:
122 options noxwait;
123 data _null_;
124 rc = system('mkdir \\W98052442n3m1\public\x\y\z');
125 put rc=;
126 run;
rc=0
NOTE: DATA statement used (Total process time):
real time 1.68 seconds
cpu time 0.03 seconds
That's my actual log file; "public" is a Windows shared folder on that network PC and the entire path was created. Perhaps using the SYSTEM function did the trick. I never ever use the X command myself.
You need to quote your x commands, e.g.
x 'mkdir "c:\this\that\something else"' ;
Also, I've never had a problem using UNC paths, e.g.
x "\\server.domain\share\runthis.exe" ;
This seems to work just fine, with the DOS window remaining open. You may need the XSYNC option. I am using 9.3 TS1M1 64-bit under VMware on a Mac:
options xwait xsync;
x mkdir c:\newdirectory;