UNIX: How to move the last created file to a certain directory through the terminal - mv

I am able to get the file name of the last created/modified file in the current directory with this command:
ls -t | head -n1
I then use the obtained file name with the mv command to move it to a directory, like this:
mv $(ls -t | head -n1) directory/
But it doesn't move the file.
What am I doing wrong?

Maybe like this, with the command substitution quoted so that a file name containing spaces is passed to mv as a single argument:
mv "$(ls -t | head -n1)" directory/

find with xargs runs successfully but no changes to file

I run a command to batch process image files using resmushit:
find /home/user/image/data -type f -print0 | xargs -n 1 -P 10 -0 resmushit -q 85 --preserve-filename
The command runs successfully and tells me the files were optimized and saved however when I check the files in the folder there is no change.
Edit: it looks like the problem might be with resmushit. When I run it on pictures within my working directory it works, for example:
resmushit -q 85 --preserve-filename test.jpg
Is there a way to make xargs or a different command run the command within each folder recursively?
I ended up finding directories instead and passing each one to a small script with xargs:
find /home/user/image/data -type d -print0 | xargs -n 1 -P 10 -0 bashscript
and the script is:
#!/bin/sh
# Run resmushit on every file in the directory given as the argument.
cd "$1" || exit 1
resmushit -q 85 --preserve-filename *
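If GNU find is available, the -execdir action runs the command from the directory containing each matched file, which sidesteps the working-directory issue without a helper script. A sketch under that assumption (note it processes one file at a time, without the -P 10 parallelism of the xargs version):

find /home/user/image/data -type f -execdir resmushit -q 85 --preserve-filename {} \;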

how to create a folder structure recursively

There are two file paths.
I am currently in the eee folder:
/Volumes/aaa/bbb/ccc/ddd/eee/text.txt
1) I want to mv the text.txt file to the path below:
/Volumes/aaa/bbb/ccc/ddd_1/eee/text.txt
or just create the folder structure only:
/Volumes/aaa/bbb/ccc/ddd_1/eee/
I tried to do this recursively with the commands below, but none of them work.
There are some solutions related to this, but they do not work either:
mkdir -p $(`pwd | sed 's/ddd/ddd_l/'`)
or
rsync -av -f"+ */" -f"- *" "$pwd" "$(`pwd | sed 's/ddd/ddd_l/'`)"
or
mv test.MTS `pwd | sed 's/ddd/ddd_l/'`
or
cp -R test.MTS `pwd | sed 's/ddd/ddd_l/'`
Can anyone help with this?
Try this:
new_file="/Volumes/aaa/bbb/ccc/ddd/eee/text.txt"
mkdir -p "$(dirname "$new_file")"
touch "$new_file"

gsutil command to delete old files from last day

I have a bucket in Google Cloud Storage with a tmp folder in it. Thousands of files are created in that directory every day, and I want to delete the files older than one day every night. I could not find a gsutil option for this job, so I used a classic, simple shell script instead. But the files are deleted very slowly.
650K files have accumulated in the folder, and 540K of them must be deleted. But my shell script ran for a whole day and managed to delete only 34K files.
The gsutil lifecycle feature cannot do exactly what I want: it cleans the whole bucket, whereas I only want to regularly delete the files under a certain folder. At the same time, I want the deletion to be faster.
I'm open to your suggestions and help. Can I do this with a single gsutil command, or does it need a different method?
Here is the simple script I created for testing (a temporary bulk-delete approach):
## step 1 - list the files together with their dates and save them to list1.txt.
gsutil -m ls -la gs://mygooglecloudstorage/tmp/ | awk '{print $2,$3}' > /tmp/gsutil-tmp-files/list1.txt
## step 2 - filter list1.txt: based on the current date, save the older files to list2.txt.
cat /tmp/gsutil-tmp-files/list1.txt | awk -F "T" '{print $1,$2,$3}' | awk '{print $1,$3}' | awk -F "#" '{print $1}' | grep -v `date +%F` | sort -bnr > /tmp/gsutil-tmp-files/list2.txt
## step 3 - replace the first field of each line with the gsutil delete command, turning the list into a shell script.
cat /tmp/gsutil-tmp-files/list2.txt | awk '{$1 = "/root/google-cloud-sdk/bin/gsutil -m rm -r "; print}' > /tmp/gsutil-tmp-files/remove-old-files.sh
## step 4 - set the script permissions and delete the old lists.
chmod 755 /tmp/gsutil-tmp-files/remove-old-files.sh
rm -rf /tmp/gsutil-tmp-files/list1.txt /tmp/gsutil-tmp-files/list2.txt
## step 5 - run the shell script and remove it when done.
/bin/sh /tmp/gsutil-tmp-files/remove-old-files.sh
rm -rf /tmp/gsutil-tmp-files/remove-old-files.sh
There is a very simple way to do this. For example, list the objects, filter by date and extension, and pipe the names to gsutil rm -I, which reads the object URLs to delete from stdin:
gsutil -m ls -l gs://bucket-name/ | grep 2017-06-23 | grep .jpg | awk '{print $3}' | gsutil -m rm -I
There isn't a simple way to do this with gsutil or object lifecycle management as of today.
That being said, would it be feasible for you to change the naming format for the objects in your bucket? That is, instead of uploading them all under "gs://mybucket/tmp/", you could append the current date to that prefix, resulting in something like "gs://mybucket/tmp/2017-12-27/". The main advantages to this would be:
Not having to do a date comparison for every object; you could run gsutil ls "gs://mybucket/tmp/" | grep "gs://[^/]\+/tmp/[0-9]\{4\}-[0-9]\{2\}-[0-9]\{2\}/$" to find those prefixes, then do date comparisons on the last portion of those paths.
Being able to supply a smaller number of arguments on the command line (prefixes, rather than the name of each individual file) to gsutil -m rm -r, thus being less likely to pass in more arguments than your shell can handle.
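A hedged sketch of that prefix-based cleanup, assuming bash, GNU date, and the gs://mybucket/tmp/YYYY-MM-DD/ layout suggested above; since ISO dates compare chronologically as plain strings, the prefix name itself can be tested against a cutoff:

# Delete dated prefixes strictly older than yesterday.
cutoff=$(date -d "1 day ago" +%F)
gsutil ls "gs://mybucket/tmp/" |
grep "gs://[^/]\+/tmp/[0-9]\{4\}-[0-9]\{2\}-[0-9]\{2\}/$" |
while read -r prefix; do
  day=$(basename "$prefix")            # e.g. 2017-12-27
  [ "$day" \< "$cutoff" ] && gsutil -m rm -r "$prefix"
done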

Using results from grep to write results line by line with sed

I am trying to take every file name in a directory that has the extension .text
and write it to a file in that same directory line by line starting at line number 14.
This is what I have so far, but it doesn't work:
cp workDir | grep -r --include *.text | sed -i '14i' home.text
Any assistance is appreciated. Note: I am on Unix.
You can do the above task with the following shell command:
find workDir -name "*.text" >> home.text
This will do what you asked about in the comments.
cp workDir doesn't work because cp is for copying, as in cp source destination. Further explanation of cp can be read with man cp.
To achieve your goal, go to your directory with cd ~/path/to/workDir. There you can use the ls command and redirect its output, appending to your existing file with >>, for all files with the .text extension.
For example like this:
ls *.text >> home.text
This will append only the file names, line by line, to your home.text, without a preceding path like the find command in the answer before produces.
Let me know if you would like another format for the file names you want to append.
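Neither answer handles the "starting at line number 14" requirement from the question. A hedged sketch using GNU sed, whose r command reads a file in after the addressed line, so addressing line 13 makes the inserted names begin at line 14:

# Collect the names, then splice them into home.text after line 13.
# (If home.text has fewer than 13 lines, nothing is inserted.)
ls *.text > /tmp/names.txt
sed -i '13r /tmp/names.txt' home.text
rm /tmp/names.txt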

comparing two directories with separate diff output per file

I need to see what has changed between two directories that contain different versions of a software source tree. While I have found a way to get a single .diff file, how can I obtain a separate file for each changed file in the two directories? I need this because the single diff is about 6 MB, and I wanted something more manageable.
I ran into this problem too and ended up with a few lines of shell script. It takes three arguments: the source and destination directories (as passed to diff) and a target folder (which should already exist) for the output.
It's a bit hacky, but maybe it will be useful for someone. Use it with care, especially if your paths contain special characters.
#!/bin/sh
DIFFARGS="-wb"
LANG=C
TARGET=$3
# Backslash-escape the slashes in both paths so they can be embedded
# in the sed expression below.
SRC=`echo $1 | sed -e 's/\//\\\\\\//g'`
DST=`echo $2 | sed -e 's/\//\\\\\\//g'`
if [ ! -d "$TARGET" ]; then
    echo "'$TARGET' is not a directory." >&2
    exit 1
fi
# List the files that differ, reduce each line to the path relative to
# the two roots, then diff each file individually into its own .diff.
diff -rqN $DIFFARGS "$1" "$2" | sed "s/Files $SRC\/\(.*\?\) and $DST\/\(.*\?\) differ/\1/" | \
while read file
do
    if [ ! -d "$TARGET/`dirname \"$file\"`" ]; then
        mkdir -p "$TARGET/`dirname \"$file\"`"
    fi
    diff $DIFFARGS -N "$1/$file" "$2/$file" > "$TARGET"/"$file.diff"
done
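A hypothetical invocation, assuming the script above has been saved as dirdiff.sh and made executable (the names here are placeholders):

mkdir -p diffs
./dirdiff.sh source-v1 source-v2 diffs
# each changed file ends up as diffs/<relative-path>.diff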
If you want to compare source code, it is better to commit it to a version control system such as svn.
After you have done so, do a diff of the two versions and pipe the output to file.diff:
svn diff --old svn:url1 --new svn:url2 > file.diff
A bash for loop will work for you. The following will diff two directories of C source code and produce a separate diff for each file. (Beware that file names containing whitespace will break the loop, and two files with the same base name in different subdirectories will overwrite each other's output.)
for FILE in $(find <FIRST_DIR> -name '*.[ch]'); do
    DIFF=<DIFF_DIR>/$(basename "$FILE").diff
    diff -u "$FILE" "<SECOND_DIR>/${FILE#<FIRST_DIR>/}" > "$DIFF"
done
When applying the resulting diffs, use the patch level (-pN) that matches the paths on the lines starting with +++.