To set the file modification date of images to the exif date, I tried the following:
exiftool '-FileModifyDate<DateTimeOriginal' image.jpg
But this gives me an error about SetFileTime.
So maybe exiftool cannot do this on Linux.
Can I combine
exiftool -m -p '$FileName - $DateTimeOriginal' -if '$DateTimeOriginal' -DateTimeOriginal -s -S -ext jpg .
with "touch --date ..."?
See this Exiftool Forum post.
The command used there is (take note of the use of backticks, not single quotes):
touch -t `exiftool -s -s -s -d "%Y%m%d%H%M.%S" -DateTimeOriginal TEST.JPG` TEST.JPG
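To apply that to every JPEG in a directory, a simple loop sketch (assuming each file actually has a DateTimeOriginal tag; files without it would make touch fail) could be:
for f in *.JPG; do
    # set the file's modification time from its DateTimeOriginal tag
    touch -t "$(exiftool -s -s -s -d "%Y%m%d%H%M.%S" -DateTimeOriginal "$f")" "$f"
done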
But I'm curious about your error. Exiftool should be able to set the FileModifyDate on Linux (though FileCreateDate is a different story). What version of Exiftool are you using (exiftool -ver to check)?
Another possibility is that the DateTimeOriginal tag is malformed or doesn't have the full date/time info in it.
FWIW, StarGeek's answer was a great pointer in the right direction, but it did not work for me: many of my photos were reported to have "Invalid EXIF text encoding" (no obvious difference compared to those that were "fine"), even though exiftool somefile.jpg would clearly output a valid "Modify Date".
So this is what I did:
for i in *.jpg ; do d=`exiftool "$i" | grep Modify | sed 's/.*: //g'` ; echo "$i : $d" ; done
...to produce output like this:
CAM00786.jpg : 2013:11:19 18:47:27
CAM00787.jpg : 2013:11:25 08:46:08
CAM00788.jpg : 2013:11:25 08:46:19
...
It was enough for me to output the timestamps next to the file names, but given a little bit of date-time formatting, it could easily be used to "touch" the files to modify their filesystem timestamps.
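For example, a sketch of that extra step, letting exiftool format the date for touch (assuming every file has a complete Modify Date tag), might look like this:
for i in *.jpg ; do
    d=$(exiftool -s -s -s -d "%Y%m%d%H%M.%S" -ModifyDate "$i")
    # skip files where the tag is missing or empty
    [ -n "$d" ] && touch -t "$d" "$i"
done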
I would like an explanation of the command below, as I am not able to understand the functionality of -d in sed.
Does it list the files based on their modification time, or does it only list files when there are more than 90 files in the specified path?
The filename format is the same for all the files.
Filename: 20171010220002.txt
FILES_TO_RETAIN=90
ls -1t /apps/feroz/*.txt|sed "1,${FILES_TO_RETAIN:-90}d"
I know the ls and sed commands. The ls options are explained below:
-1 lists only the filenames
-t lists the files by modification time, with the newest file at the top
As per the sed man page, the explanation for -d is provided below:
-d Delete pattern space. Start next cycle.
There is no -d option in sed. The first parameter to sed is taken as the sed script.
Here d is a command in the sed script and is one of the commands that accept address ranges.
1,90 is the address range for the command d.
sed "1,${FILES_TO_RETAIN:-90}d" will delete lines 1 to 90 from the input, which in your case is the file list.
Conclusion: The resulting output is the list of files sorted by modified time (newest on top), excluding the first 90 files (newest ones).
Side note: using tail -n +<n> would do the same (but it excludes the first n-1 lines, so the equivalent here would be tail -n +91).
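If the intent is the usual log-rotation cleanup (keep the 90 newest files, delete the rest), a sketch of how that list is typically consumed, assuming the filenames contain no spaces or newlines (as in 20171010220002.txt), would be:
FILES_TO_RETAIN=90
# delete everything except the 90 newest .txt files
ls -1t /apps/feroz/*.txt | sed "1,${FILES_TO_RETAIN:-90}d" | xargs -r rm --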
I tried '-N' and '--no-clobber', but the only result I get is a new copy of the existing example.exe with a number appended, using the syntax 'example.exe.1'. This is not what I'd like to get. I just need to download and overwrite the file example.exe in the same folder where I already saved a copy of it, without wget checking whether mine is older or newer than the example.exe file already present in my download folder. Do you think this is possible, or do I need to create a script that deletes the example.exe file, or maybe something that changes its modification date, etc.?
If you specify the output file using the -O option it will overwrite any existing file.
For example:
wget -O index.html bbc.co.uk
Running it multiple times will keep overwriting index.html.
wget doesn't let you overwrite an existing file unless you explicitly name the output file on the command line with option -O.
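Applied to the example.exe case from the question, that would be something along these lines (the URL is only a placeholder):
wget -O example.exe https://example.com/example.exe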
I'm a bit lazy and I don't want to type the output file name on the command line when it is already known from the downloaded file. Therefore, I use curl like this:
curl -O http://ftp.vim.org/vim/runtime/spell/fr.utf-8.spl
Be careful when downloading files like this from unsafe sites. The above command will write a file named as the connected web site wishes to name it (inside the current directory though). The final name may be hidden through redirections and php scripts or be obfuscated in the URL. You might end up overwriting a file you don't want to overwrite.
And if you ever find a file named ls or any other enticing name in the current directory after using curl that way, refrain from executing the downloaded file. It may be a trojan downloaded from a rogue or corrupted web site!
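If you like curl's brevity but still want a fixed, predictable filename, the lowercase -o option lets you name the output file explicitly, for example:
curl -o fr.utf-8.spl http://ftp.vim.org/vim/runtime/spell/fr.utf-8.spl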
wget --backups=1 google.com
renames original file with .1 suffix and writes new file to the intended filename.
Not exactly what was requested, but could be handy in some cases.
-c or --continue
From the manual:
If you use ‘-c’ on a non-empty file, and the server does not support
continued downloading, Wget will restart the download from scratch and
overwrite the existing file entirely.
I like the -c option. I started with the man page and then the web, and I've searched for this several times. For example, if you're relaying a webcam, the image always needs to be named image.jpg. It seems like this should be clearer in the man page.
I've been using this for a couple of years to download things in the background, sometimes combined with "limit-rate = " in my wgetrc file:
while true
do
    wget -c -i url.txt && break
    echo "Restarting wget"
    sleep 2
done
Make a little file called url.txt and paste the file's URL into it. Set this script up in your path or maybe as an alias and run it. It keeps retrying the download until there's no error. Sometimes at the end it gets into a loop displaying
416 Requested Range Not Satisfiable
The file is already fully retrieved; nothing to do.
but that's harmless, just ctrl-c it. I think it's always gotten the file I wanted even if wget runs out of retries or the connection temporarily goes away. I've downloaded things for days at a time with it. A CD image on dialup, yes, always with wget.
My use case involves two different URLs, sometimes the second one doesn't exist, but if it DOES exist, I want it to overwrite the first file.
The problem with using wget -O is that, when the second file DOESN'T exist, it will overwrite the first file with a BLANK file.
So the only way I could find is with an if statement:
--spider checks if a file exists, and returns 0 if it does
--quiet fails quietly, with no output
-nv is quiet, but still reports errors
wget -nv https://example.com/files/file01.png -O file01.png
# quietly check if a different version exists
wget --quiet --spider https://example.com/custom-files/file01.png
if [ $? -eq 0 ] ; then
    # A different version exists, so download and overwrite the first
    wget -nv https://example.com/custom-files/file01.png -O file01.png
fi
It's verbose, but I found it necessary. I hope this is helpful for someone.
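If only the final contents of file01.png matter, a slightly more compact sketch of the same idea (same URLs, using wget's exit status directly in the if) could be:
# download the custom version if it exists, otherwise the default one
if wget --quiet --spider https://example.com/custom-files/file01.png; then
    wget -nv https://example.com/custom-files/file01.png -O file01.png
else
    wget -nv https://example.com/files/file01.png -O file01.png
fi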
Here is an easy way to get it done with parameter expansion (trimming the URL down to its filename):
url=https://example.com/example.exe ; wget -nv $url -O ${url##*/}
Or you can use basename
url=https://example.com/example.exe ; wget -nv $url -O $( basename $url )
For those who do not want to use -O and want to specify the output directory only, the following command can be used.
wget \
--directory-prefix "$dest" \
--backups 0 \
-- "$link"
The first command will download from the source with the wget command.
The second command will remove the older file:
wget \
--directory-prefix "$dest" \
--backups 0 \
-- "$link"; \
rm -f -- "$file.1"
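Note that $file is not defined in the snippet above; presumably it holds the path wget saved to. As a sketch, it might be derived from $dest and $link like this:
# hypothetical: reconstruct the saved path from the destination directory and the URL
file="$dest/$(basename -- "$link")"
rm -f -- "$file.1"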
I'm trying to find some stuff in a large number of text files, and I want the output to be in a file so I can read it at leisure:
grep -i 'alter table' *.sql >> tables.txt
grep (this is the Windows version of the GNU tool) complains at the >>. I've tried piping and all the rest, and there doesn't seem to be an option to define an output file either.
Any ideas?
Reviving this old question, but it's among the first Google results.
My grep buffered its output differently, so I needed to add this option to get the results into a file:
grep --line-buffered
Source
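Combined with the command from the question, that would look something like this:
grep --line-buffered -i 'alter table' *.sql >> tables.txt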
This works here:
grep -i "other something" *.txt >> tables.txt
I have a directory containing a bunch of files, some text some binary, with no consistent naming. I want to search and replace a string in text files only. So I went with:
perl -i -pne 's#/some/text/to/replace#/replacement/text#' *
Remove the -i option and you will see that binary files get caught. How do I modify this one-liner to skip binary files?
ack -n --text --sort -f . | xargs perl -i -pne 's…'
Abusing ack this way is much quicker than writing your own solution with -T.
Well, this is all based on what your definition of a text file is. Perl 5 has the -T filetest operator that will tell you if a filename or filehandle is a text file (using Perl 5's definition):
perl -i -pne 'BEGIN{@ARGV = grep -T, @ARGV} s#regex#replacement#' *
The BEGIN block will filter out any files that don't pass the -T test, so they won't even be read (except for their first block because that is what -T uses to determine if they are text).
From perldoc -f -X
The -T and -B switches work as follows. The first block or so of the file is examined for odd characters such as strange control codes or characters with the high bit set. If too many strange characters (>30%) are found, it's a -B file; otherwise it's a -T file. Also, any file containing a zero byte in the first block is considered a binary file. If -T or -B is used on a filehandle, the current IO buffer is examined rather than the first block. Both -T and -B return true on an empty file, or a file at EOF when testing a filehandle. Because you have to read a file to do the -T test, on most occasions you want to use a -f against the file first, as in next unless -f $file && -T $file .
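Following that advice (-f first, then -T), a shell-loop sketch that runs the substitution only on files perl itself classifies as text might look like this:
for f in *; do
    # run the substitution only on regular files that pass perl's -T text test
    if perl -e 'exit((-f $ARGV[0] && -T $ARGV[0]) ? 0 : 1)' "$f"; then
        perl -i -pe 's#/some/text/to/replace#/replacement/text#' "$f"
    fi
done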
Let's say there's a.gz, and b.gz.
$ gzip_merge a.gz b.gz -output c.gz
I'd like to have this program. Of course,
$ cat a.gz b.gz > c.gz
doesn't work, because the final DEFLATE block of a.gz has BFINAL set and b.gz starts with its own GZIP header (refer to RFC 1951 and RFC 1952). But if you unset BFINAL, throw away the second GZIP header, and walk through the byte boundaries of the second gzip file, you can merge them.
In fact, I thought of writing an open-source program for this, but didn't know how to publish it. So I asked Joel to be my program manager; I walked him through my explanation and defense, and he finally understood what I wanted to do, but said he was too busy. :(
Of course, I could write one myself and find my own way to publish it. But I can't do this alone, because the work I do in my day job is the property of my employer.
Are there any volunteers? We could work as programmer (me) and publisher (you), or programmer (you) and publisher (me). All I need is some credit. I once implemented the Universal Decompressor Virtual Machine described in RFC 3320, so I know this is feasible.
OR, you could point me to an existing program that does this. It would be very useful for managing log files, e.g. merging 365 daily gzipped log files into one. ;)
Thanks.
Of course, cat a.gz b.gz > c.gz doesn't work.
Actually, it works just fine. I just tested it. It's even documented (sort of) in the gzip man page.
Multiple compressed files can be concatenated. In this case, gunzip
will extract all members at once. For example:
gzip -c file1 > foo.gz
gzip -c file2 >> foo.gz
Then
gunzip -c foo
is equivalent to
cat file1 file2
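Applied to the files from the question, a quick check would be (gunzip -c simply decompresses both members in sequence):
cat a.gz b.gz > c.gz
gunzip -c c.gz    # prints the contents of a, then the contents of b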
You could also:
zcat a.gz b.gz > c.txt && gzip c.txt
as long as your Linux/Unix distribution has zcat built in, which most of them do (and you could install it for the ones that do not).
Alternatively:
zcat a.gz b.gz | gzip -c > c.txt.gz