I am trying to write a new value for an XMP tag using exiftool, but for some reason the tag is not being recognized.
Reading the field works:
exiftool -PropertyId /Users/user/test.jpg
Property Id : 17934
But writing a value to the PropertyId tag does not work. I also tried -xmp:PropertyId and got the same result:
exiftool -PropertyId=12345 /Users/user/test.jpg
Warning: Tag 'PropertyId' is not defined
Nothing to do.
Exporting the metadata shows that the field is there (I only copied the XMP section):
exiftool -xmp -b -a /Users/user/test.jpg > data.xmp
...
<rdf:Description rdf:about=''
xmlns:xmp='http://ns.adobe.com/xap/1.0/'>
<xmp:Brand>Brand Name</xmp:Brand>
<xmp:CreateDate>2015-07-08T11:45:21</xmp:CreateDate>
<xmp:CreatorTool>CreatorTool</xmp:CreatorTool>
<xmp:FacilityName>The Restaurant Name</xmp:FacilityName>
<xmp:MetadataDate>2015-09-14T13:12:51-06:00</xmp:MetadataDate>
<xmp:ModifyDate>2015-09-14T13:12:51-06:00</xmp:ModifyDate>
<xmp:PropertyId>00000</xmp:PropertyId>
<xmp:PropertyName>Property Name</xmp:PropertyName>
<xmp:ShootDate>2016-03-12</xmp:ShootDate>
</rdf:Description>...
Am I missing something? Test file is here: test.jpg
ExifTool cannot edit metadata it does not have a definition for, which is the case here. In fact, your example XMP shows a number of tags that claim to be in the "xap" group but are not actually part of that (very old) standard, including Brand, FacilityName, PropertyName, and ShootDate. You'll find that none of those are directly editable by exiftool, and probably not by any other program except the one that originally wrote them.
If you want exiftool to be able to write those tags, you'll need to create definitions for them. See the ExifTool Example Config file for details.
Also note that, as I said, "xap" is a very old standard and has long since been replaced. ExifTool will update the tags it does know about to the newer standard. For details, see the XMP xmp tags entry.
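For example, a minimal config file modelled on the example config should make those tags writable in the XMP-xmp group. This is only a sketch: I have assumed plain string values, since I don't know how the tags are actually meant to be structured.

%Image::ExifTool::UserDefined = (
    # add the unrecognized tags to the existing XMP-xmp namespace
    'Image::ExifTool::XMP::xmp' => {
        PropertyId   => { Writable => 'string' },
        PropertyName => { Writable => 'string' },
        Brand        => { Writable => 'string' },
        FacilityName => { Writable => 'string' },
        ShootDate    => { Writable => 'string' },
    },
);
1;  # end of config file

Save that as ~/.ExifTool_config (or load it with the -config option), and writing should then work:

exiftool -XMP-xmp:PropertyId=12345 /Users/user/test.jpg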
Related
Years ago, I wrote a small app in iText 2 to gather reports on a weekly basis and concatenate them into one PDF. The app used com.lowagie.text.pdf.PdfCopy to copy and merge the PDFs, and it worked fine, performing exactly as expected.
A few weeks ago I looked into migrating the application to iText 7. To that end, I used the copyPagesTo method of com.itextpdf.kernel.pdf.PdfDocument. When run on the same file set, this produces warnings like:
WARN PdfNameTree - Name "section.1" already exists in the name tree; old value will be replaced by the new one.
When I click on the link to "section.1" in the first document of the merged PDF, I am taken to "section.1" of the last document. Not what I expected, and not what happens when using the iText 2 app. In the PDFs produced by iText 2, if I click on the link to "section.1" of the first document in the combined PDF, I am taken to section 1 of the first document.
There is a hint in the Javadocs for copyPagesTo, which says:
If outlines destination names are the same in different documents, all
such outlines will lead to a single location in the resultant
document. In this case iText will log a warning. This can be avoided
by renaming destinations names in the source document.
There is, however, no explanation of how this should be done. I find it odd that this should be necessary in iText 7 when it wasn't in iText 2.
Is there a simple way to get around this problem?
I've also tried the Sejda desktop app and it produces correct results, but I would prefer to automate the process through a batch script.
My guess is iText 2 didn't even know it might be a problem.
If iText can't deduplicate destination names, the procedure is roughly:
Follow /Catalog -> /Names -> /Dests in each document to find the destination name tree.
Deduplicate the names by adding suffixes. Remember that a name with a suffix added might collide with an existing name in the same or another document, so be careful!
Now you can rewrite the destination name trees. Since you have only added suffixes, you can do this in place: the lexicographic ordering of the names is unaltered, so the search tree structure is not broken.
Now rewrite the destination links in each PDF to use the new names: for example, any dictionary entry with the key /Dest, or any /D in a /GoTo action.
Now, after all this preprocessing, the files will merge without name clashes.
(I know all this because I've just implemented it for my own PDF software. It's slightly hairy stuff, but not intractable.)
If you would like to test it, I can provide a development version of cpdf with this functionality.
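If you want to stay within iText 7, a rough, untested sketch of those steps using the low-level kernel API might look like the following. The class and helper names are mine, and for brevity it only touches the /Dests name tree and link annotations; outline items and the older /Catalog /Dests dictionary would need the same treatment.

import com.itextpdf.kernel.pdf.*;
import java.io.IOException;

public class RenameDestinations {

    public static void rename(String src, String dst, String suffix) throws IOException {
        PdfDocument pdf = new PdfDocument(new PdfReader(src), new PdfWriter(dst));

        // Steps 1-3: walk /Catalog -> /Names -> /Dests and suffix every name in place.
        PdfDictionary names = pdf.getCatalog().getPdfObject().getAsDictionary(PdfName.Names);
        PdfDictionary dests = names == null ? null : names.getAsDictionary(PdfName.Dests);
        if (dests != null) {
            suffixNameTree(dests, suffix);
        }

        // Step 4: rewrite string destinations used by link annotations.
        for (int i = 1; i <= pdf.getNumberOfPages(); i++) {
            PdfArray annots = pdf.getPage(i).getPdfObject().getAsArray(PdfName.Annots);
            if (annots == null) continue;
            for (int j = 0; j < annots.size(); j++) {
                PdfObject o = annots.get(j);
                if (!o.isDictionary()) continue;
                PdfDictionary annot = (PdfDictionary) o;
                suffixStringEntry(annot, PdfName.Dest, suffix);       // /Dest (name)
                PdfDictionary action = annot.getAsDictionary(PdfName.A);
                if (action != null && PdfName.GoTo.equals(action.get(PdfName.S))) {
                    suffixStringEntry(action, PdfName.D, suffix);     // /A << /S /GoTo /D (name) >>
                }
            }
        }
        pdf.close();
    }

    // Recursively visit a name-tree node; leaf /Names arrays alternate
    // [name, destination, name, destination, ...].
    private static void suffixNameTree(PdfDictionary node, String suffix) {
        PdfArray kids = node.getAsArray(PdfName.Kids);
        if (kids != null) {
            for (int i = 0; i < kids.size(); i++) {
                PdfObject kid = kids.get(i);
                if (kid.isDictionary()) suffixNameTree((PdfDictionary) kid, suffix);
            }
        }
        PdfArray leaf = node.getAsArray(PdfName.Names);
        if (leaf != null) {
            for (int i = 0; i + 1 < leaf.size(); i += 2) {
                PdfObject name = leaf.get(i);
                if (name.isString()) {
                    leaf.set(i, new PdfString(((PdfString) name).getValue() + suffix));
                }
            }
            leaf.setModified();
        }
        // For robustness, the /Limits arrays of intermediate nodes should be
        // refreshed as well; that is omitted here for brevity.
    }

    // Replace a string-valued entry (key -> name) with name + suffix.
    private static void suffixStringEntry(PdfDictionary dict, PdfName key, String suffix) {
        PdfObject value = dict.get(key);
        if (value != null && value.isString()) {
            dict.put(key, new PdfString(((PdfString) value).getValue() + suffix));
            dict.setModified();
        }
    }
}

Run something like this over each source file with a distinct suffix (say "-doc1", "-doc2") before calling copyPagesTo, and the merged name tree should no longer contain duplicate names.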
I am trying to add a thumbnail to a JPEG picture using libexif.
For now I'm borrowing the code from exif (the command line tool that is shipped by the libexif team).
However, I noticed that the XMP tags get deleted from the metadata. There is an old bug report here.
I tried to see how to achieve this anyway with libexif, but I don't really understand how to get the XMP from the input file and put it in the output file. I just want to copy all the XMP data; I don't need to extract anything from it.
I saw there is a tag EXIF_TAG_XML_PACKET in exif_tag.h but couldn't figure out how to read/write it.
A related solution is in this SO answer, but it looks complicated. I'm not familiar with coding in C.
Is it actually possible to keep all the XMP when using only the libexif API? Have things changed on that in recent years? How would you write this in code?
Thanks
I believe it should be somewhat straightforward. XMP fields are described in the ISO/Adobe standard. Regular Kotlin/Java/Android file I/O and some string manipulation should be all that is required.
I would start by becoming intimately familiar with ISO 16684-1:2019. Then write a method for your JPEG file class that grabs all the XMP fields. Store those fields in a temp file (to prevent difficult-to-recover data loss in the event your code or libexif crashes). Hand the file off to libexif and generate the thumbnail. Finally, when that's done, restore the XMP fields. If the thumbnail is stored in an XMP field as well (and it sounds like it is), it may be easier to concatenate that field with the ones already grabbed, updating the temp file so that it contains every XMP field, before adding them all back to the JPEG.
Unfortunately, I do not currently have the time to read a 50-page ISO standard, synthesize the information, and then write the code to implement the solution. Here's a link to the standard, at least, to get you started.
https://www.iso.org/obp/ui/#iso:std:iso:16684:-1:ed-2:v1:en
I'm sorry if this doesn't have enough information. I don't typically ask for help online like this.
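That said, if the goal is simply to carry the raw XMP packet across unchanged rather than to interpret it, a byte-level copy of the JPEG's XMP APP1 segment may be enough. Below is a rough, untested sketch in Java; the class, method, and file names are mine, and it ignores the Extended XMP mechanism used for packets larger than about 64 KB.

import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Arrays;

public class XmpSegmentCopy {

    // XMP APP1 payloads start with this namespace identifier and a NUL byte.
    private static final byte[] XMP_ID =
            "http://ns.adobe.com/xap/1.0/\0".getBytes(StandardCharsets.ISO_8859_1);

    /** Return the complete XMP APP1 segment (marker + length + payload), or null if absent. */
    static byte[] extractXmpSegment(byte[] jpeg) {
        int pos = 2;                                    // skip SOI (FF D8)
        while (pos + 4 <= jpeg.length && (jpeg[pos] & 0xFF) == 0xFF) {
            int marker = jpeg[pos + 1] & 0xFF;
            if (marker == 0xDA) break;                  // SOS: entropy-coded data follows
            int len = ((jpeg[pos + 2] & 0xFF) << 8) | (jpeg[pos + 3] & 0xFF); // includes length bytes
            if (marker == 0xE1 && startsWith(jpeg, pos + 4, XMP_ID)) {
                return Arrays.copyOfRange(jpeg, pos, pos + 2 + len);
            }
            pos += 2 + len;
        }
        return null;
    }

    /** Splice a previously saved XMP segment back in, after any leading APPn segments. */
    static byte[] insertXmpSegment(byte[] jpeg, byte[] xmpSegment) {
        int pos = 2;                                    // skip SOI
        while (pos + 4 <= jpeg.length && (jpeg[pos] & 0xFF) == 0xFF) {
            int marker = jpeg[pos + 1] & 0xFF;
            if (marker < 0xE0 || marker > 0xEF) break;  // keep JFIF/Exif APPn blocks first
            pos += 2 + (((jpeg[pos + 2] & 0xFF) << 8) | (jpeg[pos + 3] & 0xFF));
        }
        byte[] out = new byte[jpeg.length + xmpSegment.length];
        System.arraycopy(jpeg, 0, out, 0, pos);
        System.arraycopy(xmpSegment, 0, out, pos, xmpSegment.length);
        System.arraycopy(jpeg, pos, out, pos + xmpSegment.length, jpeg.length - pos);
        return out;
    }

    private static boolean startsWith(byte[] data, int offset, byte[] prefix) {
        if (offset + prefix.length > data.length) return false;
        for (int i = 0; i < prefix.length; i++) {
            if (data[offset + i] != prefix[i]) return false;
        }
        return true;
    }

    public static void main(String[] args) throws Exception {
        byte[] original = Files.readAllBytes(Paths.get("input.jpg"));
        byte[] xmp = extractXmpSegment(original);       // save before handing off to libexif
        // ... run the libexif-based thumbnail step here, producing output.jpg ...
        byte[] thumbed = Files.readAllBytes(Paths.get("output.jpg"));
        if (xmp != null && extractXmpSegment(thumbed) == null) {
            Files.write(Paths.get("output.jpg"), insertXmpSegment(thumbed, xmp));
        }
    }
}

This sidesteps parsing the standard entirely: the packet is treated as an opaque blob, which matches the "I just want to copy all XMP data" requirement.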
I'm using DITA Open Toolkit 3.4 on Windows. I generated a plugin called "vcr2" using Jarno's (very excellent and helpful) PDF Plugin Generator and then made a handful of customizations. The plugin uses the pdf2 plugin as a base. When I try to use the vcr2 plugin, my images are not working. I've tracked the problem down to malformed image filenames in the image's href attribute.
For example:
In my source file (a DITA Task), the markup for one of my images looks like this:
<image href="MyRemindersChooseReminder.png"/>
If I run a transform with the pdf2 plugin, the images work fine. In the merged stage1.xml file in the Temp folder, the XML for that same image looks like this:
<image class="- topic/image " href="df2d132af27436c59c5c8c4282e112d62bec8201.png" placement="inline" xtrc="image:1;10:66" xtrf="file:/V:/Vasont/Extract/t12340879-minimal/t12340879.xml"/>
It is processed into a file Topic.fo, and looks like this:
<fo:external-graphic
src="url('file:/V:/Vasont/Extract/t12340879-minimal/MyRemindersChooseReminder.png')"/>
Everything works fine and the image looks fine.
If I run the same file through my 'vcr2' plugin, which just calls the same pdf2 plugin with some overrides, all the images get broken:
stage1.xml
<image class="- topic/image " href="df2d132af27436c59c5c8c4282e112d62bec8201.png" placement="inline" xtrc="image:1;10:66" xtrf="file:/V:/Vasont/Extract/t12340879-minimal/t12340879.xml"/>
Topic.fo
<fo:external-graphic
src="url('file:/V:/Vasont/Extract/t12340879-minimal/df2d132af27436c59c5c8c4282e112d62bec8201.png')"
/>
As I track this down further, it appears that somewhere in the map-reader Ant task, this filename gets changed to that cryptic string of pseudo-hexadecimal. I think later on it's supposed to be changed back or resolved to a complete URI or something.
So, the two-part question is: Why does Open Toolkit change my filenames, and what's supposed to change them back?
DITA-OT's preprocess uses hashes for temporary filenames because it allows the code to not deal with directory structures. This enables preprocess to work in so-called "map-first" mode, where it first processes all DITA map resources and only then starts to process DITA topic and image resources.
The preprocess has a step called clean-preprocess that can rewrite the temporary file names to match the source resource file names. However, this rewrite operation is disabled for PDF output because the original file names are not used for anything in that output type.
I'm using JSDoc tutorials, as described in http://usejsdoc.org/about-tutorials.html
I can't get the configuration options to work, and I haven't yet found any examples or discussion of the topic online.
In my trivial example, I pass a tutorials parameter to JSDoc, and it builds a tutorial file for tutorial1.md. I would like to give this tutorial a display name of "First Tutorial".
Following this from the online docs:
Each tutorial file can have an additional .js/.json file (with the same name, just a different extension) which will hold additional tutorial configuration. Two fields are supported:
title overrides the display name for the tutorial with the one specified in its value (the default title for a tutorial is its filename).
children, which holds a list of sub-tutorial identifiers.
I tried several attempts at adding tutorial1.json, or tutorial1.js but either JSDoc pukes on the json file, or doesn't seem to recognize anything I throw at it in the js file.
Can anyone suggest a simple format for a tutorial configuration file that should work?
(Edited to include one .js file I tried, which did not change the tutorial display name.)
tutorial1.md
First Tutorial
================
tutorial1.js
/**
#title First Tutorial
*/
#title = "First Tutorial";
title = "First Tutorial";
It works with a .json extension but not with a .js extension; jsdoc does not even seem to try to read the file if a .js extension is used. So the docs seem incorrect.
With a .json extension, and using jsdoc 3.2.2, the following works:
{
"title": "Custom Title"
}
Your .json file needs to be in the same directory as your .md file and have the same name (ignoring extensions).
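For reference, a minimal layout along those lines would be (file names are just placeholders; the tutorials directory is whatever you pass to jsdoc with -u/--tutorials):

tutorials/tutorial1.md      the tutorial text ("First Tutorial")
tutorials/tutorial1.json    the configuration shown above, e.g. { "title": "First Tutorial" }

jsdoc src/myscript.js -u tutorials

Per the docs quoted in the question, the same .json file can also list sub-tutorials, e.g. "children": ["tutorial2", "tutorial3"], where the identifiers are the other tutorials' filenames without the extension.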
I have a large amount of code that I'm running doxygen against. To improve performance I'm trying to break it into modules and merge the result into one set of docs. I thought tag files would do the trick, but either I have it configured wrong or I'm misunderstanding how it works.
The directories are laid out:
root +
|-src+
| |-a
|
|-doc+
|-a.dox
|-main.dox
|-main.md
|-output+
|-a+
| |-html
|-main+
|-html
In addition to 'a' there are other peer directories, but I am starting with just one.
a.dox generates output and a tag file into root/doc/output
OUTPUT_DIRECTORY=output/a
GENERATE_TAGFILE = output/a/a.tag
INPUT=../src/a
main.dox just inputs the markdown file that has a mainpage tag and refers to the other project's tag file.
OUTPUT_DIRECTORY=output/main
INPUT = main.md
TAGFILES=output/a/a.tag=output/a/html
Should this merge or link all the docs under main, so that I can browse 'a' globals, modules, pages, etc.? Or does this only generate links to 'a' if I explicitly cross-reference a documented entity in 'a' from inside 'main'?
If this should work, any thoughts on where my syntax is incorrect? I've tried various ways to define TAGFILES. Is the location given in TAGFILES relative to the main.dox file, to the a.tag file, or to the a/html directory?
If I'm off base and TAGFILES doesn't work this way, is there another way to merge sets of doxygen directories into one?
Thanks.
I suggest you read this answer about how I recommend using tag files and the conditions that should apply: https://stackoverflow.com/a/8247993/784672
To answer your first question: doxygen will in general not merge the various index files together (otherwise no performance would be gained). You can, however, still get external members listed in the index by setting ALLEXTERNALS to YES.
Doxygen will (auto)link symbols from other sources imported via a tag file. So in general you should divide your code into more or less self-contained modules/components/libraries, and if one such module depends on another, then import its tag file so that doxygen can link to the other documentation set. If you run doxygen twice (once for the tag file and once for the documentation) you can also resolve cyclic dependencies if you have them.
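Regarding the TAGFILES syntax: my understanding is that the part before the '=' is the location of the tag file on disk, relative to the directory from which you run doxygen, while the part after the '=' is only used as a prefix for the generated HTML links, so it should point to a's html directory relative to main's own generated HTML pages (or be an absolute path or URL). For your layout, running from root/doc, that would be something like the following (the ../../a/html prefix is an assumption based on your directory tree):

OUTPUT_DIRECTORY = output/main
INPUT            = main.md
TAGFILES         = output/a/a.tag=../../a/html
ALLEXTERNALS     = YES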
In my case I made a custom index page with links to all modules, and added a custom entry to the menu of each generated page that linked back to this index (see http://www.doxygen.nl/manual/customize.html#layout for how to add a user-defined entry to the navigation menu/tree).
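If you go the custom-menu route, the layout change itself is small: generate a layout file with doxygen -l, point the LAYOUT_FILE option at it, and add a user entry inside the <navindex> element of DoxygenLayout.xml. Something like the following (the url value is only a guess at where your combined index page would end up):

<navindex>
  <tab type="user" url="../../main/html/index.html" title="All Modules"/>
  <tab type="mainpage" visible="yes" title=""/>
  <!-- remaining default tabs unchanged -->
</navindex>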