Mail attachment from Unix script in loop - email

I have the sample script below, which was working fine until last week. I'm not sure what has changed, but no attachments are being sent any more; I just get the greeting/body text without any attachment. A test with a sample .txt attachment succeeds, but .csv files never arrive. I'm not sure whether CSVs are being filtered by the Unix mail server, but I don't see any error message. Any idea how to track down or check what is going wrong?
#!/bin/bash
FILES=/inbox/*.*
to="test@temp.com"
from="test@temp.com"
subject="Test Files"
filecount=$(find $FILES -type f | wc -l)
totalfiles=" : Total "
subject=${subject}${totalfiles}${filecount}
body="Dear All,Please find the attached latest files."
echo "$subject" "$filecount"
declare -a attargs
for att in $(find $FILES -type f -name "*.*"); do
    # attach every file found under /inbox
    attargs+=( "-a" "$att" )
done
mail -s "$subject" -r "$from" "${attargs[@]}" "$to" <<< "$body"
Kind Regards
Kevin
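A hedged debugging suggestion (not from the original post): first check that the glob still matches the CSVs, then try mailing a single CSV by hand. Note that -a means "attach a file" in BSD/Heirloom mailx but "append a header" in GNU Mailutils mail, so if the default mail command on the box changed recently, the attachment arguments would be silently ignored; mutt, if installed, makes a handy cross-check. The path /inbox/sample.csv below is only a placeholder for one of the real files.
# does the glob still match any CSV files at all?
find /inbox -type f -name "*.csv"
# send one CSV by hand with the same mail command used in the script
echo "test body" | mail -s "csv test" -r "test@temp.com" -a /inbox/sample.csv "test@temp.com"
# if mutt is available, cross-check with it (mutt's -a always means attach)
echo "test body" | mutt -s "csv test" -a /inbox/sample.csv -- "test@temp.com"
Checking which mail implementation is installed (for example via man mail) should show whether -a still means "attach" on this system, and the mail log (often /var/log/maillog or /var/log/mail.log) may show whether the message left the box with its attachments stripped.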

Related

wget: Download image files from URL list that are >1000KB & strip URL parameters from filename

I have a text file C:\folder\filelist.txt containing a list of numbers, for example:
 
345651
342679
344000
349080
I want to append the URL as shown below, download only the files that are >1000KB, and strip the parameters after "-a1" from the filename, for example:
URL | Size | Output File
https://some.thing.com/gab/abc-345651-def-a1?scl=1&fmt=jpeg | 1024kb | C:\folder\abc-345651-def-a1.jpeg
https://some.thing.com/gab/abc-342679-def-a1?scl=1&fmt=jpeg | 3201kb | C:\folder\abc-342679-def-a1.jpeg
https://some.thing.com/gab/abc-342679-def-a1?scl=1&fmt=jpeg | 644kb | -
https://some.thing.com/gab/abc-349080-def-a1?scl=1&fmt=jpeg | 2312kb | C:\folder\abc-349080-def-a1.jpeg
This is the code I currently have, which works for downloading the files and appending the .jpeg extension, given the full URL is in the text file. It does not filter out the smaller images or strip the parameters following "-a1".
cd C:\folder\
wget --adjust-extension --content-disposition -i C:\folder\filelist.txt
I'm running Windows and I'm a beginner at writing batch scripts. The most important thing I'm trying to accomplish is to avoid downloading images <1000KB; it would be acceptable if I had to append the URLs in the text file manually and rename the files after the fact. Is it possible to do what I'm trying to do? I've tried modifying the script by referencing the posts below, but I can't seem to get it to work. Thanks in advance!
Wget images larger than x kb
Downloading pdf files with wget. (characters after file extension?)
Spider a Website and Return URLs Only
#change working directory
cd /c/folder/
#convert the input file list to unix line endings
dos2unix filelist.txt
for image in $(cat filelist.txt)
do
    imageURL="https://some.thing.com/gab/abc-$image-def-a1?scl=1&fmt=jpeg"
    # read the Content-Length from wget's debug output; strip any trailing carriage return
    size=$(wget -d -qO- "$imageURL" 2>&1 | grep 'Content-Length' | awk '{print $2}' | tr -d '\r')
    if [[ $size -gt 1024000 ]]; then
        imgname="/c/folder/abc-$image-def-a1.jpeg"
        wget -O "$imgname" "$imageURL"
    fi
done
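An untested refinement of the same idea (not from the original post): wget's --spider option together with -S/--server-response prints only the response headers, so the size can be checked without downloading each large image twice. It assumes the server reports Content-Length for such requests.
# check the size from the headers alone, then download only if it is large enough
while read -r image; do
    imageURL="https://some.thing.com/gab/abc-$image-def-a1?scl=1&fmt=jpeg"
    # take the last Content-Length in case of redirects, and strip any carriage return
    size=$(wget --spider -S "$imageURL" 2>&1 | grep -i 'Content-Length' | tail -n 1 | awk '{print $2}' | tr -d '\r')
    if [[ $size -gt 1024000 ]]; then
        wget -O "/c/folder/abc-$image-def-a1.jpeg" "$imageURL"
    fi
done < filelist.txt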

Calling Patch.exe from Powershell with p0 argument

I'm currently trying to patch some files via PowerShell using Patch.exe.
I am able to call the exe using the '&' call operator, but it doesn't seem to be picking up my -p0 argument. I'm not an expert on PowerShell, so any help would be appreciated!
Here is what I am calling in PowerShell:
$output = & "$scriptPath\patch.exe" -p0 -i $scriptPath\diff.txt
My error reads:
can't find file to patch at input line 5
Perhaps you used the wrong -p or --strip option?
The text leading up to this was:
Which I can emulate by leaving out the p0 parameter on my patch file from commandline.
Here are some alternatives I've already tried:
#$output = & "$scriptPath\patch.exe" -p0 -i "$scriptPath\diff.txt"
#CMD /c "$scriptPath\patchFile.bat" (where patchFile.bat has %~dp0patch.exe -p0 < %~dp0diff.txt; it seems like PowerShell reads < as 0<, so I think there is an error there)
#GET-CONTENT $scriptPath\diff.txt | &"$scriptPath\patch.exe" "-p0"
#GET-CONTENT $scriptPath\diff.txt | CMD /c "$scriptPath\patch.exe -p0"
Thanks!
Try:
$output = & "$scriptPath\patch.exe" -p0 -i "$scriptPath\diff.txt"
Patch.exe was starting in the wrong working directory; I solved this by using Push-Location / Pop-Location.
Here is what my code looks like now:
Push-Location $scriptPath
$output = & "$scriptPath\patch.exe" -p0 -i "$scriptPath\diff.txt"
Pop-Location
Keith also mentioned in one of his comments that you can use:
[Environment]::CurrentDirectory = $pwd
I have not tested this, but I assume it does the same thing (Keith is a PowerShell MVP, I am just a student).

Recursively replace colons with underscores in Linux

First of all, this is my first post here and I must specify that I'm a total Linux newb.
We have recently bought a QNAP NAS box for the office, on this box we have a large amount of data which was copied off an old Mac XServe machine. A lot of files and folders originally had forward slashes in the name (HFS+ should never have allowed this in the first place), which when copied to the NAS were all replaced with a colon.
I now want to replace all of the colons with underscores, and have found the following commands in another thread here: pitfalls in renaming files in bash
However, the flavour of Linux on this box does not understand the rename command, so I'm having to use mv instead. I have tried the code below, but it only works for the files in the current folder; is there a way I can change it to include all subfolders?
for f in *.*; do mv -- "$f" "${f//:/_}"; done
I have found that I can find all the files and folders in question using the find command as follows
Files:
find . -type f -name "*:*"
Folders:
find . -type d -name "*:*"
I have been able to export a list of the results above by using
find . -type f -name "*:*" > files.txt
I tried the command below, but find gives an error message saying it doesn't understand the -exec switch. Is there a way to pipe this all into one command, or could I somehow use the files I exported previously?
find . -depth -name "*:*" -exec bash -c 'dir=${1%/*} base=${1##*/}; mv "$1" "$dir/${base//:/_}"' _ {} \;
Thank you!
Vincent
So your for loop code works, but only in the current dir. Also, you are able to use find to build a file with all the files with : in the filename.
So, as you've already done all this, I would just loop over each line of your file, and perform the same mv command.
Something like this:
for f in `cat files.txt`; do mv $f "${f//:/_}"; done
EDIT:
As pointed out by tripleee, using a while loop is a better solution, e.g.:
while read -r f; do mv "$f" "${f//:/_}"; done <files.txt
Hope this helps.
Will
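If the find on the box supports -depth but not -exec, a similar while loop fed straight from find should cover the directories as well as the files. This is an untested sketch based on the command quoted in the question; renaming the deepest entries first keeps the parent paths valid, and it assumes the names contain no newlines:
find . -depth -name "*:*" | while IFS= read -r f; do
    dir=${f%/*}                      # everything before the last slash
    base=${f##*/}                    # the entry's own name
    mv -- "$f" "$dir/${base//:/_}"   # replace colons only in the last component
done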

Parse thousands of xml files with awk

I have several thousand files and they each contain only one very long line.
I want to convert them all into one file with one entry per line, split at the ID fields. I have this working with a few files, but it takes too long on hundreds of files and seems to crash on thousands of files. I'm looking for a faster way that doesn't hit those limits.
(find -type f -name '*.xml' -exec cat {} \;) | awk '{gsub("ID","\nID");printf"%s",$0}'
I have also tried this..
(find -type f -name '*.xml' -exec cat {} \;) | sed 's/ID/\nID/g'
I think the problem is trying to use replacement instead of insertion or it is using too much memory.
Thanks
I can't test it with thousands of files, but instead of cat-ing all of the data into memory before processing it with awk, try running awk on a batch of those files at a time, like:
find . -type f -name "*.xml*" -exec awk '{gsub("ID","\nID");printf"%s",$0}' {} +
1) Create a list of all the files you need to process.
2) Divide this list into smaller lists of 50 files each.
3) Create a script that reads a sub-list, does the ID splitting, and writes an intermediate file.
4) Create another script that runs the script from step 3 as background processes, 20 at a time, for as many sub-lists as necessary.
5) Merge the output files (a rough sketch of these steps follows).
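A rough, untested sketch of those steps, assuming the file names contain no spaces; the names all_xml.txt, chunk_* and out_* are only illustrative, and for simplicity it launches all the chunks at once rather than throttling to 20 jobs:
find . -type f -name '*.xml' > all_xml.txt    # 1) list all the files
split -l 50 all_xml.txt chunk_                # 2) sub-lists of 50 files each
for list in chunk_*; do                       # 3) and 4) process each sub-list in the background
    awk '{gsub("ID","\nID"); printf "%s",$0}' $(cat "$list") > "out_$list" &
done
wait                                          # let the background jobs finish
cat out_chunk_* > combined.txt                # 5) merge the intermediate files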

File movement issue on NFS file system on Unix box

Currently there are 4.5 million files in a single directory on an NFS file system. As a result any read or write operation on that directory is causing a huge delay.
In order to overcome this problem, all the files in that directory will be moved into different directories based on their year of creation.
Apparently, the find command we are using with the -ctime option is not working because of the huge file volume.
We tried listing the files by year of creation and then feeding the list to a script that moves them in a for loop, but even that failed because ls -lrt simply hung.
Is there any other way to tackle this problem?
Please help.
Script contents:
1) filelist.sh
ls -tlr|awk '{print $8,$9,$6,$7}'|grep ^2011|awk '{print $2,$1,$3,$4}' 1>>inboundstore_$1.txt 2>>Error_$1.log
ls -tlr|awk '{print $8,$9,$6,$7}'|grep ^2011|wc -l 1>>count_$1.log
2) filemove.sh
INPUT_FILE=$1        ## text file which has the list of files from the previous script
FINAL_LOCATION=$2    ## destination directory
if [ -r "$INPUT_FILE" ]
then
    for file in `cat $INPUT_FILE`
    do
        echo "TIME OF FILE COPY OF [$file] IS : `date`" >> xyz/IBSCopyTime.log
        mv "$file" "$FINAL_LOCATION"
    done
else
    echo "$INPUT_FILE does not exist"
fi
Use the readdir iterator instead of ls, so the directory entries are streamed one at a time rather than being read and sorted all at once.
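A possible shape for that, sketched with ls -f rather than a direct readdir() call (ls -f skips the sort and emits entries roughly as readdir() returns them). This is untested, assumes GNU date for the per-file year, and reuses the FINAL_LOCATION idea from filemove.sh; the cd path is a placeholder:
cd /path/to/the/huge/directory              # placeholder for the real directory
ls -f | while IFS= read -r file; do
    [ -f "$file" ] || continue              # skips . , .. and any subdirectories
    year=$(date -r "$file" +%Y)             # the file's modification year (GNU date)
    mkdir -p "$FINAL_LOCATION/$year"
    mv -- "$file" "$FINAL_LOCATION/$year/"
done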