How does Adobe Fireworks store vector information, pages, and layers in PNG format?

How is Fireworks able to store this extra information in a format that is otherwise a flat raster? And is there any open-source way to write similar vector, layered, paginated files in PNG format that would be readable by Fireworks?

The PNG format allows for ancillary data chunks that store metadata alongside the image itself. I don't believe anyone has actually worked out the format Adobe stores that data in, though.
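As a rough illustration (a minimal Python sketch; the filename is a placeholder, and no specific Fireworks chunk names are assumed since that format is undocumented), you can at least list the chunks in a Fireworks PNG and see which non-standard ones carry the extra data:

    import struct

    def list_png_chunks(path):
        """Yield (chunk_type, length) for every chunk in a PNG file."""
        with open(path, "rb") as f:
            if f.read(8) != b"\x89PNG\r\n\x1a\n":
                raise ValueError("not a PNG file")
            while True:
                header = f.read(8)
                if len(header) < 8:
                    break
                length, chunk_type = struct.unpack(">I4s", header)
                f.seek(length + 4, 1)  # skip the chunk data and its CRC
                yield chunk_type.decode("latin-1"), length
                if chunk_type == b"IEND":
                    break

    # Standard chunks are IHDR, PLTE, IDAT, IEND plus the well-known ancillary ones;
    # anything else is a private/application chunk (e.g. whatever Fireworks writes).
    for name, size in list_png_chunks("fireworks_document.png"):
        print(name, size, "bytes")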

Related

Converting georeferenced .png files to .ecw using IrfanView

Is there any way I can preserve image georeferencing information after processing or converting the image via IrfanView?
I have a couple of .png files with spatial reference and I'm trying to save them as ECW images. Unfortunately this process destroys the spatial reference that is vital for using the images in GIS software. If there were a way around this problem it would be a great help.
Thanks.
I would recommend using gdal_translate to handle all your geospatial translation needs.
Assuming that you have a PNG with an associated world file, you can easily convert to ECW (provided that the proprietary ECW driver is compiled in).
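A rough sketch of that conversion using GDAL's Python bindings (filenames are placeholders; this assumes a GDAL build that includes the ECW driver with write support):

    from osgeo import gdal

    gdal.UseExceptions()

    # GDAL picks up the world file (e.g. input.pgw) sitting next to the PNG,
    # which is what supplies the georeferencing.
    src = gdal.Open("input.png")

    # This only works if the proprietary ECW driver was compiled in with write support.
    gdal.Translate("output.ecw", src, format="ECW")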

Matlab access PDF as an array of images

I'm building a system which searches for a specific region in a picture and saves it. Everything works fine. Mostly I am going to extract these regions from PDF books.
So I am looking for a solution to treat a PDF file in Matlab as an array of images (each page being an image). Up till now, the only thing I have found is how to open PDF files in Matlab.
The best solution I have come up with is to export the PDF as many PNG images and iterate through them. There is nothing wrong with this idea, but I am wondering whether I am missing something.
Judging from this page, it appears to be impossible to import a PDF directly into Matlab.
And a quick File Exchange search for 'pdf import' only offers an attempt to extract text, rather than the images.
So, all in all, your approach of saving the PDF as images and then importing them seems to be the way to go.
I agree with Salvador Dali and Dennis. To convert each page of the PDF to a PNG image, I downloaded ImageMagick and followed the commands here:
https://aleksandarjakovljevic.com/convert-pdf-images-using-imagemagick/
Specifically:
convert -density 150 -antialias "input_file_name.pdf" -resize 1024x -quality 100 "output_file_name-%03d.png"
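Since the question was about iterating over the result, here is a small sketch of the same "export then loop" idea (shown in Python rather than Matlab, purely as an illustration; filenames are placeholders):

    import glob
    import subprocess

    # Rasterize every page of the PDF to a numbered PNG (same ImageMagick call as above).
    subprocess.run(
        ["convert", "-density", "150", "-antialias", "book.pdf",
         "-resize", "1024x", "-quality", "100", "page-%03d.png"],
        check=True,
    )

    # The "array of images" is then simply the sorted list of page files.
    for page_path in sorted(glob.glob("page-*.png")):
        print("processing", page_path)  # load the page and search for the region here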
Of course, there are other discussions about using ImageMagick for this purpose:
Converting a PDF to PNG and
Convert PDF to PNG using ImageMagick
This is an old thread, but it's the one I found when I asked the same question, so I thought I would elaborate in case it's helpful to future users who also land on this thread.

How to read pdf table content data?

I have a requirement to read a PDF file containing only tabular data, like an Excel file. I need to extract the cell values of a given PDF file.
Is this possible in any way using the iText API? If you have something to share, please do, or suggest any other solutions.
The PDF format is just a canvas where text and graphics are placed without any structure information. As such, there aren't any iText objects in a PDF file. Each page will probably contain a number of strings, but you can't reconstruct a phrase or a paragraph from those strings. There are probably a number of lines drawn, but you can't retrieve a Table object based on those lines.
In short: parsing the content of a PDF file is NOT POSSIBLE with iText.
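To make that concrete with a different toolkit (a Python sketch using pdfminer.six rather than iText; the filename is a placeholder): all a PDF page hands back is text fragments with coordinates, and any table structure has to be reconstructed from those positions yourself.

    from pdfminer.high_level import extract_pages
    from pdfminer.layout import LTTextContainer

    # Dump every text fragment with its bounding box; note that there is no
    # "table" or "cell" object anywhere -- just strings placed at coordinates.
    for page_layout in extract_pages("tables.pdf"):
        for element in page_layout:
            if isinstance(element, LTTextContainer):
                x0, y0, x1, y1 = element.bbox
                print(round(x0), round(y0), repr(element.get_text().strip()))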
You can try this! This lets you read PDF pages.
I recently ran into this problem. I wasn't able to make it work with iText.
An alternative solution I found was to open the PDF document in Adobe and export it to XML. At least with my PDFs this preserved the table information, and I was then able to programmatically work with the XML to generate tabular files like Excel, etc.
The other issue I ran into was that Adobe only lets you export one file at a time, and I had lots of files. Luckily Adobe also has a merge function. I ended up merging all the files together, exporting them as one big XML file, and working with that file to generate what I needed.

iPhone - how to store documents consisting of multiple images?

My iPhone (actually, iPad) app creates documents that consist of several images, plus a bit of metadata. What's the best practice for storing these sorts of documents on disk? I see two main options:
Create a folder for each document, and store my images as separate PNG files within the folder (plus another little file for the metadata).
Create a single file which contains all images and metadata.
But I'm not sure how to easily do option 2. I think I can convert my images in PNG format to/from NSData, but then what? I'm still a newbie at Cocoa, but I believe I saw something about stuffing mixed data into some NSSomethingOrOther and having this write itself out to disk, and read itself back in later. Does this ring a bell with anyone? And, will it work with large binary blobs of data like my images?
Or would you recommend I simply go with option 1?
Simply go with option 1. It's clean, elegant, and simple to implement. You could even use (a subset of) HTML.
TIFFs and PDFs can have multiple pages.
Creating a document centric iPhone/iPad application with own file format using ZipArchive

How to get EXIF data from my jpegs?

I have to link a date and a name to some jpegs that I am including in my bundle, or possibly downloading from my own server to the Documents folder. Is there a way to extract EXIF data easily?
If so, then I will use EXIF to store this info. If not, then I will have to create a database or flat file that maps my extra data to the image file.
Keep in mind, these are not photos the iPhone has taken and is providing via UIImagePicker or from outside the sandbox. These are photos that I am including with the app or downloading to the Docs folder myself. The important point here is ease:
Is it easier to:
read the EXIF data from my image files, or
have another file that keeps track of the image file and the associated data (could be SQLite)?
Thanks!
You can try using the iphone-exif toolkit to extract the data. However, it's licensed under the GPL, so if your app is commercial you'll need to negotiate a license deal. If that's not viable then you may want to go the external metadata route.
The actual EXIF data is stored in the form of a small TIFF file with EXIF-specific TIFF tags for information that doesn't have a home in the TIFF specification. When placed in a JPEG file (really a JFIF bitstream), it is stored in a JPEG APP1 marker which limits the total size of the EXIF data to just a bit less than 64KB.
It shouldn't be that difficult to locate the APP1 marker, confirm it contains EXIF data, and then parse out a specific collection of EXIF tags with fairly brute force coding.
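A minimal sketch of that brute-force approach in Python (the filename is a placeholder): walk the JPEG markers until you hit the APP1 segment whose payload starts with the "Exif" identifier, then hand the embedded TIFF block to whatever tag parser you use.

    import struct

    def find_exif_block(path):
        """Return the embedded TIFF block from a JPEG's EXIF APP1 segment, or None."""
        with open(path, "rb") as f:
            if f.read(2) != b"\xff\xd8":           # every JPEG starts with the SOI marker
                raise ValueError("not a JPEG file")
            while True:
                marker = f.read(2)
                if len(marker) < 2 or marker[0] != 0xFF or marker[1] == 0xDA:
                    return None                    # EOF or start-of-scan: no EXIF present
                length, = struct.unpack(">H", f.read(2))
                data = f.read(length - 2)          # the length field counts its own 2 bytes
                if marker[1] == 0xE1 and data.startswith(b"Exif\x00\x00"):
                    return data[6:]                # TIFF header + IFDs holding the EXIF tags

    tiff_block = find_exif_block("photo.jpg")
    print("EXIF found:", tiff_block is not None)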
One example you can look at is exiftool which does just that, and is written in Perl and open source under the same terms as Perl itself.
If these files are purely for use in your own application and will not be reused in other tools by the user, then there is some mileage in storing your data as XML/JSON in the comment segment (marker 0xFFFE). As mentioned before, you get just short of 64 KB to play with.
The beauty of using the comment segment is that it should be preserved by image editing tools, is quick to access (because you do not need to traverse the IFD blocks that store EXIF data; you just read/write a text string with a 4-byte type/length header) and is human-readable/writable in a graphics app.
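A minimal sketch of that idea in Python (filenames and the metadata payload are placeholders): write a copy of the JPEG with a COM (0xFFFE) segment holding a small JSON blob. Strictly, JFIF wants the APP0 segment immediately after SOI, so a more careful version would insert the comment after the existing APPn segments; this sketch keeps it simple.

    import json

    def add_jpeg_comment(src_path, dst_path, payload):
        """Copy a JPEG, inserting a COM (0xFFFE) segment that carries JSON metadata."""
        body = json.dumps(payload).encode("utf-8")
        if len(body) + 2 > 0xFFFF:
            raise ValueError("a comment segment is limited to just under 64 KB")
        with open(src_path, "rb") as f:
            data = f.read()
        if data[:2] != b"\xff\xd8":
            raise ValueError("not a JPEG file")
        # Marker FFFE, then a big-endian length that counts itself (2 bytes) plus the payload.
        segment = b"\xff\xfe" + (len(body) + 2).to_bytes(2, "big") + body
        with open(dst_path, "wb") as f:
            f.write(data[:2] + segment + data[2:])

    add_jpeg_comment("photo.jpg", "photo_tagged.jpg", {"name": "Alice", "date": "2010-06-01"})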
I would avoid storing the associated data in a db if practical, so that you don't risk the db becoming out of sync with the available files.
I use ExifTool embedded in my app. Works a treat.