I am using exiftool to change metadata in an image. Here is a minimal working example:
#!/bin/bash
EXIF=exiftool
$EXIF -LensModel="Bubble Teleskop on Marsmission" "$1"
This works for many entries: Model, Longitude, Latitude, etc.
But now I am trying to change the "XMP Toolkit" entry with
$EXIF -xmptoolkit='Paint' "$1"
or similar, and every time I try to set the string, only the name and version of exiftool itself get written instead.
Any ideas?
Thanks
Put all your changes in a single command:
$EXIF -Lens="XSD II 50" -xmptoolkit='Paint' -LensMake="Tamron" "$1"
Exiftool can execute multiple changes in a single command.
XMPToolkit is always going to get updated whenever you change any XMP data.
You can either update XMPToolkit as the last item in your batch, or add -tagsfromfile @ -XMPToolkit to your commands after the command in which you set XMPToolkit. The -tagsfromfile @ option recopies the listed tags from the file itself back into the output.
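A minimal sketch of both approaches (a hedged example, assuming the @ form of -tagsfromfile, which copies tags from the file being processed; the tag values are illustrative):
#!/bin/bash
EXIF=exiftool
# Set the custom toolkit string; make it the last XMP write in this run
$EXIF -XMPToolkit="Paint" "$1"
# In later commands that edit XMP tags, recopy the stored value so the
# edit does not replace it with exiftool's own name and version
$EXIF -LensMake="Tamron" -tagsfromfile @ -XMPToolkit "$1"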
I am running a MATLAB script that produces a figure. To save this figure I use:
print(h_f,'-dpng','-r600','filename.png')
This means that if I don't change the filename each time I run the script, the figure filename.png is overwritten.
Is there a way to save a figure to a default name, e.g. untitled.png, so that when the script is run a second time it creates a new figure untitled(1).png instead of overwriting the original one?
You could create a new filename based on the number of existing files:
defaultName = 'untitled';
% Use the count of files matching the pattern as the next index
fileName = sprintf('%s_%d.png', defaultName, ...
    length(dir([defaultName '_*.png'])));
print(h_f,'-dpng','-r600', fileName)
Add a folder path to your dir search path if the files aren't located in your current working directory.
This creates a zero-indexed list of file names:
untitled_0.png
untitled_1.png
untitled_2.png
untitled_3.png
...
You could also use tempname to generate a long random name for each iteration. The name is unique in most cases; see the Limitations section of its documentation.
print(h_f,'-dpng','-r600', [tempname(pwd) '.png'])
The input argument (pwd in the example) is needed if you do not want to save the files in your temporary directory (tempdir).
You can try something like this:
for jj = 1:N
    name_image = 'filename';
    ext = '.png';
    %% do your stuff
    filename = strcat(name_image, num2str(jj), ext);
    print(h_f, '-dpng', '-r600', filename)
end
If you want to execute your script multiple times (because you don't want to use a for loop), just declare a variable (for example jj) that is incremented at the end of the script:
jj = jj + 1;
Be careful not to delete this variable; when you run the script again, it will use the next value of jj to compose the name of the new image, as in the sketch below.
This is just an idea
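A minimal MATLAB sketch of that counter idea (the exist guard keeps jj across runs; names are illustrative):
% Initialize the counter only if it is not already in the workspace
if ~exist('jj', 'var')
    jj = 1;
end
filename = strcat('untitled', num2str(jj), '.png');
print(h_f, '-dpng', '-r600', filename)
jj = jj + 1;   % the next run writes the following index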
We are processing multiple files using an external table. Is there any way I can get the name of the file being processed by the external table and store it in a database table?
The only workaround I can find is appending the file name to every record in the flat file, which isn't ideal with a huge dataset and multiple files.
Can anyone help with this?
Thanks
No, the file name is simply never passed from the gpfdist daemon back to Greenplum, so you have to append the file name to each line. You can use a gpfdist transformation to do so.
I was struggling with this as well; here's my solution. Please note I'm not an expert in Linux, so there may be a one-liner solution.
I wanted to add a filename column in front of my records.
That can be done with sed. I've created a transform.sh file with the following content:
#!/bin/sh
filename=$1
#echo $filename >> transform.txt
sed -e "s|^|$filename\v|" "$filename"
Please note that I used a vertical tab, \v, as the delimiter. Also, the filename could contain /, hence | is used as the sed separator. And to have $filename expanded to its value, the sed expression must use double quotes.
Test it; it looks good (the vertical-tab delimiters are shown here as [\v]):
./transform.sh countersamples-2016-03-02--11-51-10.csv
countersamples-2016-03-02--11-51-10.csv[\v]timestamp[\v]machine[\v]category[\v]instance[\v]name[\v]value
countersamples-2016-03-02--11-51-10.csv[\v]2016-03-02 11:51:10.064[\v]DESKTOP-4PLQKVL[\v]Memory[\v]% Committed Bytes In Use[\v]74.8485488891602
This part is done; let's continue with gpfdist. We need a YAML file that can be passed to gpfdist. I named it transform.yaml.
Content:
---
VERSION: 1.0.0.1
TRANSFORMATIONS:
  add_filename:
    TYPE: input
    CONTENT: data
    COMMAND: /bin/bash transform.sh %filename%
Please note the %filename% value here. It seems that gpfdist pre-filters the files that need to be handled and passes them one by one to our transform.
Let's fire up gpfdist:
gpfdist -c transform.yaml -v
Now go into Greenplum and create an external table such as:
CREATE READABLE EXTERNAL TABLE "ext_transform"
(
    "filename" text,
    "timestamp" timestamp without time zone,
    "machine" text,
    "category" text,
    "instance" text,
    "name" text,
    "value" double precision
)
LOCATION ('gpfdist://localhost:8080/*/countersamples*.csv#transform=add_filename')
FORMAT 'TEXT'
( HEADER DELIMITER '\013' NULL AS '\\N' ESCAPE AS '\\' )
(DELIMITER '\013' is the octal escape for the vertical tab inserted by transform.sh.)
And when we select data from it:
select * from "ext_transform";
We see each row prefixed with the name of the file it came from (result screenshot omitted).
I've created 2 folders to see how it reacts if the files are not in the same folder as the transform. This way I can distinguish between the 2 files, even if their data is identical.
I need to prepare a list of strings for translation of my iPhone application.
I have extracted strings from the *.m files using genstrings and from the XIB files using the ibtool command.
But I also have lots of text to translate in plist files (String field types enclosed in string tags).
Is there a nice bash script / command to extract those strings into a flat txt file?
I could then review and filter it so my translators can work with a nice list rather than an alien-looking XML file.
I made a custom shell script that tries to figure out the values needed. You can then use the localize.py script in a modified way (see below) to automatically create the translation files. (The line breaks were somehow very important.) If there are more entities to be translated, the shell script can be modified accordingly:
#!/bin/bash
rm -f $2
sed -n 'N;/<key>Title<\/key>/{N;/<string>.*<\/string>/{s/.*<string>\(.*\)<\/string>.*/\/* \1 *\/\
"\1" = "\1";\
/p;};}' $1 >> $2
sed -n 'N;/<key>FooterText<\/key>/{N;/<string>.*<\/string>/{s/.*<string>\(.*\)<\/string>.*/\/* \1 *\/\
\"\1" = "\1";\
/p;}
;}' $1 >> $2
sed -n 'N;/<key>Titles<\/key>/{N;/<array>/{:a
N;/<\/array>/!{
/<string>.*<\/string>/{s/.*<string>\(.*\)<\/string>.*/\/* \1 *\/\
\"\1" = "\1";\
/p;}
ba
;};};}' $1 >> $2
The localize.py script needed some modification, so I created a small package containing the localizer for the source code and for the plist files. The new script even handles duplicates (meaning it will remove them).
We recently made a small online application to do that; please take a look at: http://www.icapps.be/plist-translator/
I can't think of any command off the top of my head. However, plists are glorified XML files, and there are various parsers available for them.
It shouldn't be too difficult to write a simple Python script to get all the strings from the file.
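A minimal sketch of such a script (assuming Python 3's standard plistlib; the recursive walk helper and file names are illustrative):
#!/usr/bin/env python3
import plistlib
import sys

# Recursively collect every string value found in the plist structure
def walk(node, out):
    if isinstance(node, str):
        out.append(node)
    elif isinstance(node, dict):
        for value in node.values():
            walk(value, out)
    elif isinstance(node, list):
        for item in node:
            walk(item, out)

with open(sys.argv[1], 'rb') as f:
    root = plistlib.load(f)

strings = []
walk(root, strings)
print('\n'.join(strings))
Run it as, e.g., python3 extract_strings.py Root.plist > strings.txt (both names hypothetical); the resulting flat list can then be reviewed and filtered by hand.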
Does this help?
http://www.icanlocalize.com/site/tutorials/how-to-translate-plist-files/
We much prefer paying clients who use our translation system with our translators, but you can also translate yourself in our GUI at no charge.
My project needs a couple of things extracted from ClearCase data into an Excel sheet using a Perl script, given two particular timelines or two baselines:
all the activities associated with that baseline (column header "Activity")
the owner's ID (column header "Owner")
all the elements associated with a particular activity (column header "Element details")
for each element, the versions associated (column header "Versions")
for each element, the total number of lines of code, plus the number of lines of code added, deleted, and changed (column headers "No. of lines of code", "Lines of code added", "Lines of code deleted" and "Lines of code changed")
Please kindly help me with this.
Basically, ClearCase Perl scripting is based on parsing the output of system and cleartool commands.
The scripts are built around a cleartool-running package like CCCmd, used like this:
use strict;
use Config;
require "path/to/CCCmd.pm";

sub Main
{
    # Run a plain system command...
    my $hostname = CCCmd::RunCmd('hostname');
    chomp $hostname;
    # ...and a cleartool command, ignoring errors
    my $lsview = CCCmd::ClearToolNoError("lsview -l -pro -host $hostname");
    return 1;
}

Main() || exit(1);
exit(0);
for instance.
So once you have the basic Perl structure, all you need are the right cleartool commands to analyze, based on fmt_ccase directives.
1/ All the activities associated with that baseline (column header "Activity"):
ct descr -fmt "%[activities]CXp" baseline:aBaseline.xyz@\ideapvob
That will give you the list of activities (separated by ',').
For each activity:
2/ Owner's ID (column header "Owner"):
ct descr -fmt "%u" activity:anActivityName@\ideapvob
3/ All the elements associated with a particular activity (column header "Element details"):
Not sure: activities can list their versions (see 4/), not easily their elements.
4/ For each element, the versions associated (column header "Versions"):
For a given activity:
ct descr -fmt "%[versions]CQp\n" activity:anActivityName@\ideapvob
5/ For each element, the total number of lines of code and the lines added, deleted, and changed (column headers "No. of lines of code", "Lines of code added", "Lines of code deleted" and "Lines of code changed"):
That can be fairly long, but for each version you can compute the extended path of the previous version and make a diff, as in the sketch below.
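A rough sketch of that per-version diff (aFile.c@@/main/3 is an illustrative version-extended path; the line counting is approximate):
# Diff a version against its predecessor and count added/deleted lines
cleartool diff -diff_format -pred aFile.c@@/main/3 |
  awk '/^>/ {add++} /^</ {del++} END {printf "added=%d deleted=%d\n", add, del}'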
For all of this I would advise using a dynamic view, since you can access any version of a file from there (as opposed to a snapshot view).
Also, if you need to use Perl with ClearCase, have a look at the CPAN module ClearCase::CtCmd. I would recommend using this Perl module for invoking ClearCase commands.
For the CCCmd package, I had to remove the double-quotes in the RunCmd and RunCmdNoError subs to get it to work.
I am using RRDtool for storing data for displaying graphs. I update the RRD with RRDs::update, and this fails when trying to rewrite information, i.e. update data for a time in the past (e.g. someone moved the system time back). The error I get is:
ERROR: Cannot update /opt/dashboard/rrd/Disk/192.168.120.168_disk_1.rrd with
'1228032301:24:24' illegal attempt to update using time 1228032301 when last
update time is 1228050001 (minimum one second step)
I want to always allow the rewrite. How can I do this?
rrdtool does not write your input into the rrd file verbatim. Rather, it samples what you enter and then stores the resulting data points. So providing 'old data' to rrdtool update will not work, much as you cannot easily skip back in a sound recording to 'fix' a few bad notes.
Obviously there are ways to alter old data. The way to do this in rrdtool is to 'dump' the rrd file to XML, modify the content, and 'restore' it. Not something one would like to do on a regular basis.
I use the following script in such situations:
#!/bin/sh
rrdtool dump "$1" | perl -ne 'BEGIN {$t=`date +%s`; chomp($t);} $a=$_; if ($a =~ /lastupdate.\d+..lastupdate/) { $a =~ s/(lastupdate.)\d+(..lastupdate)/$1$t$2/; } print $a' | rrdtool restore -f - "$1"
It's a little... freaky, but I could not find another automatic solution.
According to the RRD documentation, that timestamp number must increase with each update. Given your constraints, I'd modify your update routine so that if the update fails, you catch the exception and redo the update with the time field set to 'N'. That will make RRDtool use the current time as the update time.
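A minimal sketch of that catch-and-retry approach with the RRDs Perl bindings (the file name and values are taken from the question; the error match is illustrative):
use strict;
use warnings;
use RRDs;

my $rrd = '/opt/dashboard/rrd/Disk/192.168.120.168_disk_1.rrd';
RRDs::update($rrd, '1228032301:24:24');   # fails if the timestamp is in the past
if (my $err = RRDs::error) {
    # Retry with 'N' so RRDtool stamps the sample with the current time
    RRDs::update($rrd, 'N:24:24') if $err =~ /illegal attempt to update/;
}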
Alternatively, if you don't want to deal with the catch-and-retry code, just modify your update code to always use 'N' as the time value -- then the update will always work.
It may be helpful to have a quick look at the documentation for the RRDtool update command.