Converting a text file to a GPS track file - exiftool

Note: The question has been edited according to the suggestion.
I want to geotag my images. The Images folder contains:
im1.jpg
im2.jpg
I tried the suggested CSV solution but am getting an error. I have a CSV file adata.csv:
SourceFile,DateTimeOriginal,GPSLatitude,GPSLongitude,GPSLatitudeRef,GPSLongitudeRef
im1.jpg,1635.387709,52.23829321,10.54680910,52.23829321,10.54680910
im2.jpg,1645.892446,52.23828047,10.54680857,52.23828047,10.54680857
Running this command:
C:\EXIF>exiftool -csv=adata.csv Images
produces the following error:
No SourceFile 'Images/im1.jpg' in imported CSV database
(full path: 'c:/exif/images/im1.jpg')
No SourceFile 'Images/im2.jpg' in imported CSV database
(full path: 'c:/exif/images/im2.jpg')
1 directories scanned
0 image files read

I don't know much about the GPX format, but your example doesn't include timestamps, which exiftool needs in order to sync the images with the track. Another thing to watch for is that GPX timestamps are supposed to be in UTC, which may require some work to sync properly, especially if the timestamps in your text file are local time.
Instead, I'd suggest converting your TXT file to a CSV file and using the -csv option. Only a few simple changes are required:
- The first column needs to hold filenames, which it looks like only requires adding .jpg to each number in that column. Its column header needs to be changed to SourceFile.
- The Time column could be removed, unless you want the timestamps added to the image files, in which case change its header to DateTimeOriginal.
- The Latitude and Longitude column headers need to be changed to GPSLatitude and GPSLongitude.
- Because GPS metadata is unsigned, you will need to set the reference tags: duplicate the GPSLatitude and GPSLongitude columns and change the duplicated headers to GPSLatitudeRef and GPSLongitudeRef.
This should all be relatively easy in a spreadsheet program such as Excel or LibreOffice.
At that point your new CSV file should look like this:
SourceFile,DateTimeOriginal,GPSLatitude,GPSLongitude,GPSLatitudeRef,GPSLongitudeRef
1.jpg,13:22:05,45.9874167,-76.875233,45.9874167,-76.875233
You could then run this command to fill in the GPS data:
exiftool -csv=data.csv c:\Images
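If you would rather script the conversion than use a spreadsheet, a rough MATLAB sketch of the same steps might look like this (the input file name data.txt, its format, and the column names Time, Latitude and Longitude are assumptions about your TXT file, not requirements of exiftool):
% Assumed input: a delimited text file with a header row, the image number in
% the first column, and Time, Latitude and Longitude columns.
T = readtable('data.txt');                        % 'data.txt' is a guessed name

names = strcat(string(T{:,1}), '.jpg');           % e.g. 1 -> "1.jpg"

out = table(names, T.Time, T.Latitude, T.Longitude, T.Latitude, T.Longitude, ...
    'VariableNames', {'SourceFile','DateTimeOriginal','GPSLatitude','GPSLongitude', ...
                      'GPSLatitudeRef','GPSLongitudeRef'});

writetable(out, 'data.csv');                      % then: exiftool -csv=data.csv c:\Images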

Related

Keeping whitespace in csv headers (Matlab)

So I'm reading in a .csv file, and it all works as I want bar one thing. The headers of the data have spaces, which I want later for displaying data to the user. However, these spaces get stripped when the csv file is read in via readtable (as they get used as the variable names). Again, no problem with this per se, but I still need the unmodified strings as well.
Two additional notes:
I'm happy for the strings to be stored separately from the main table if that makes things easier.
The actual .csv file I'm reading in is reasonably large (about 2 million data points) so from a computational cost side of things, the less reading of the file the better
Example read in code:
File = 'example.csv';
Import_Options = detectImportOptions( File, 'NumHeaderLines', 0 );
Data = readtable( File, Import_Options );
Example csv file (example.csv):
"this","is","an","example test"
"1","1","2","3"
"3","1","4","1"
"hot","hot","cold","hot"
You can simply read the first line with fgetl, thus grabbing the headers, before reading the entire file with readtable.
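A minimal sketch of that approach, using the example file from the question:
% Grab the raw header line before readtable renames the variables.
File = 'example.csv';
fid = fopen(File, 'r');
headerLine = fgetl(fid);                                    % '"this","is","an","example test"'
fclose(fid);
originalHeaders = strsplit(erase(headerLine, '"'), ',');    % {'this','is','an','example test'}

% Read the full file once, as before; the table's variable names are still the
% modified versions, but the untouched header strings are kept in originalHeaders.
Data = readtable(File);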

Extracting data from complex output text file using perl and placing into new text file

The complete output text file is hundreds of lines long, with relevant nuclear cross sections and a plethora of other data that I do not need for this particular problem. I am trying to extract the columns of data under "BURNUP" and the first "K-INF" from the file I attached, and place them into a separate file. I am a newbie and have a similar Perl script from a professor. I have tried to adapt it to the information I am looking for, but the only output I am getting is the two print statements. Any suggestions?

Extracting file names from an online data server in Matlab

I am trying to write a script that will allow me to download numerous (1000s of) data files from a data server (e.g., http://hydro1.sci.gsfc.nasa.gov/thredds/catalog/GLDAS_NOAH10SUBP_3H/2011/345/). Unfortunately, the names of the files in each directory are not formatted in a similar way (the time that they were created was appended to the end of the file name). I need to be able to specify the file name to subset the data (I have a special tool for these data types) and download it. I cannot find a function in MATLAB that will extract the file names.
I have looked at URLREAD, but it downloads everything including the HTML code.
Thanks for your help!
You can easily parse the page for links:
url = 'http://hydro1.sci.gsfc.nasa.gov/thredds/catalog/GLDAS_NOAH10SUBP_3H/2011/345/';
x = urlread(url);
links = regexp(x, '<a href=''([^>]+)''>', 'tokens');
This reads every link; you then have to filter out the ones you don't want. For example, this gets all .grb files:
a = regexp(x, '<a href=''([^>]+\.grb)''>', 'tokens');
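Not part of the original answer, but one way to flatten the nested cell array that regexp returns with 'tokens' into a plain list of names:
% Each element of a is a 1x1 cell holding one matched file name.
grbFiles = cellfun(@(t) t{1}, a, 'UniformOutput', false);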

Talend tFileOutputDelimited component - problems with the split .csv files

I tried my luck on the Talend forum and no luck there, so I will try here as well.
I have a job that is reading a large table and then writing the data to .csv files in increments of 25000 rows. What I have noticed is that all .csv files created after the first .csv file have the data loaded all in one row versus the first .csv file that has the data loaded in 25000 rows (as I want it).
Is there a setting on the tFileOutputDelimited component that will make the rows in all subsequent .csv files load as they do in the first (and 'good') .csv file? I am thinking it may be due to what is being used for the 'Escape char' value on the 'Advanced settings' tab, but am not sure.
On the tFileOutputDelimited component's 'Basic settings' tab, the CSV Row Separator value is CRLF("\r\n") and the field separator is ",". On the component's 'Advanced settings' tab, the Escape char value is """ and the Text enclosure value also is """.
Also, this is being run in a Windows 7 environment.
Unfortunately, the documentation I found for the tFileOutputDelimited component's 'Advanced settings' tab is lacking with regard to the CSV options.
Below is an example of what is being encountered. As listed below, the first file looks great, but all files that follow do not break on the row separator and end up placing all of the data on one row instead of individual rows.
File #1
header row
row 1
row 2
row 3
...
row 25000
File #2...
header rowrow1row2...row25000
File #3...
header rowrow1row2...row25000
If you need more details, let me know and I'll send them right off. Thank you in advance.
Figured it out. As mentioned in my initial post, the CSV Row Separator had been set to the CRLF("\r\n") option. I changed this to LF("\n") and that addressed the problem. I had looked at the generated Java code and noticed that it was not treating CRLF("\r\n") as one of the default options - only \n and \r were. This pointed me in the direction of trying the \n option.

Create PDF from CSV on iPhone

An iPhone app which I am creating generates reports from a Core Data database as a CSV file, which can then be emailed so that the user may use that data elsewhere outside of the app. I would also like to offer the ability to generate the same reports as a PDF file (of course, with nicer formatting) allowing the user to immediately print the report rather than having to jump through several hoops as with the CSV file - i.e. open in another application (e.g. Excel, Numbers) then reformat the columns (so they are wide enough for printing), bold the headings, etc.
Essentially, I want to provide the PDF file so that the user is immediately given a nicely formatted report, and they only need to export the CSV file if they wish to do data manipulation and need a format which is editable.
I was thinking that the easiest method would be taking the CSV file and then converting it into a PDF file, which would be the same as the CSV except with nicer formatting (such as a tabular layout) rather than the simple comma-separated format of the CSV file. I have been unable to find any ready-made classes for this purpose (to avoid reinventing the wheel), and I am unsure how to approach this since I have limited experience with this aspect of the SDK. Any suggestions or pointers in the right direction would be much appreciated.
You have two different problems:
Read CSV data into some structure in memory
Turn some structure in memory into a PDF
Aaron Saunders has posted some links for step 2, so here's a link for step 1:
http://github.com/davedelong/CHCSVParser
That's a CSV parser I wrote that will turn your CSV file into an NSArray of NSArrays of NSStrings.