How can I convert a large number of EPS files to PNG at a desired size?

I have many EPS images of varied dimensions, all stored in one folder.
How can I convert them all to PNG at a fixed size of 4500x5400 and store them in another folder?
Can this be done in one go, instead of converting each image manually?
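One common tool for this is ImageMagick's mogrify, but the same batch job is easy to script. Here is a minimal sketch in Python, assuming Pillow plus a Ghostscript install for EPS decoding; the folder names are illustrative:

```python
# A minimal sketch, assuming Pillow is installed and Ghostscript is on
# the PATH (Pillow uses it to rasterize EPS). "eps_in" and "png_out"
# are illustrative folder names.
from pathlib import Path
from PIL import Image

SRC = Path("eps_in")
DST = Path("png_out")
DST.mkdir(exist_ok=True)

for eps_path in SRC.glob("*.eps"):
    im = Image.open(eps_path)
    im.load(scale=4)  # rasterize the vector EPS at a higher density first
    # Force the exact 4500x5400 target; note this ignores aspect ratio.
    im.convert("RGB").resize((4500, 5400), Image.LANCZOS) \
      .save(DST / (eps_path.stem + ".png"))
```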

Related

How to load DICOM pixel data in browser preserving HU values?

I need to display DICOM images in a browser. This requires the DICOM to be converted to PNG (or another compatible) format.
I also need to calculate some overlay pixels based on dynamic input from the user. On conversion to PNG, I get four values per pixel (alpha, R, G, B), but I cannot use these values for my calculations. I need the original HU values from the DICOM images.
Is there any way that a PNG can contain the original DICOM values? I have heard this is possible using the monochrome 16-bit PNG format. How do we do that?
Alternatively, how can I load DICOM pixel data in the browser while preserving HU values?
When you convert DICOM pixel data to a non-DICOM format like PNG, BMP, JPG, or J2K, the data you are looking for is lost. You may further research whether the TIF format preserves the data and loads in a browser; I suspect it will not.
I recommend avoiding this route. Instead, I suggest using the DICOM pixel data as-is in the browser. This can be achieved with a JavaScript DICOM toolkit for browsers such as cornerstone. You may also look for another toolkit if one is available and suits you better.
Note that this involves a learning curve; explaining how it works would be too broad for this answer.
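For reference, the 16-bit monochrome PNG route the question mentions can work if the viewer knows how the values were shifted. A minimal server-side sketch, assuming pydicom, NumPy, and Pillow, and assuming the CT-style Rescale tags are present; "scan.dcm" and the +1024 offset are illustrative:

```python
# A minimal sketch, assuming pydicom, NumPy, and Pillow are installed.
# "scan.dcm" and the +1024 offset are illustrative; browser-side code
# must subtract the same offset to recover true HU values.
import numpy as np
import pydicom
from PIL import Image

ds = pydicom.dcmread("scan.dcm")
slope = float(ds.RescaleSlope)          # assumes Rescale tags exist (CT)
intercept = float(ds.RescaleIntercept)
hu = ds.pixel_array.astype(np.float64) * slope + intercept

packed = np.clip(hu + 1024, 0, 65535).astype(np.uint16)  # shift into unsigned range
Image.fromarray(packed).save("scan.png")  # uint16 array -> 16-bit grayscale PNG
```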

How to overwrite part of a PNG?

Given a PNG image and a set of data to write to it, is it possible to overwrite pixels in the existing PNG in a particular area of interest? For example, if I have a block of data in a rectangle between pixels (0,0) and (5,10), would it be possible to write this data as a block into a 10x10 PNG without any concern for the area not being overwritten? My use case is that I have map tiles where half the data will be in one tile and half in the other, with the blank pixels being white squares. I would like to combine them by simply writing the non-white pixels directly into the existing PNG as a block, without having to open, combine, then re-write the entire PNG. Does the structure of a PNG allow this?
I'm loath to claim that this is impossible, but it is certainly complicated.
First of all, pixels of a PNG are (sometimes) interlaced, so you'd have to calculate the locations of your target pixels based on the Adam7 scheme.
Furthermore, each row is independently filtered, so you'd have to transform each row of your source using the filter of the target row. Depending on the filter, you'd also have to adjust additional bytes on the border of the updated target bytes. Straight from the horse's mouth:
Though the concept is simple, there are quite a few subtleties in the actual mechanics of filtering.
Finally, all the filtered bytes are compressed using a generic compression algorithm called "deflate." Unless you want to decompress the whole thing beforehand, you need to make sure both (1) that your source data can be properly decoded and (2) that the bytes near the border of the target bytes are properly compressed in the context of their new neighbors.
I'm not a compression expert, so I won't argue in more detail. One piece of good news is that the algorithm preserves some independence between distant regions thanks to its sliding window scheme: back-references can reach at most 32,768 bytes into the preceding data.
If this seems at all easy to you, give it a try. If you're like me, though, you'll just decode the whole thing, overwrite the pixels as bitmap data, and encode the result.
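That decode, overwrite, re-encode route is short in practice. A minimal sketch, assuming Pillow; file names and the paste offset are illustrative:

```python
# A minimal sketch of the decode/overwrite/re-encode approach, assuming
# Pillow; "tile.png", "patch.png", and the (0, 0) offset are illustrative.
from PIL import Image

tile = Image.open("tile.png").convert("RGBA")
patch = Image.open("patch.png").convert("RGBA")
tile.paste(patch, (0, 0))  # overwrite the rectangle whose top-left is (0, 0)
tile.save("tile.png")      # re-encode (filter + deflate) the whole image
```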
This is practically impossible, because the pixel data (after a row-by-row "filtering" step) is compressed with zlib, and it is practically impossible to change part of a compressed stream.

Parsing for RGB regions over thousands of high resolution image files

I have lots of high-resolution image files that have regions of colors, basically blobs with different RGB values. I need to go through the images and, for every image, make a text file that contains the coordinates of one pixel in every blob. Because I have so many files, the script needs to be fast.
I already wrote some Scala code to do the task, except it only saves one location per specific RGB value, meaning if I have two blobs of the same color that are not connected, it will only save the location of the first one found. The solution to this is, for each image, to copy the locations and colors into a map, and when I find a blob, flood delete (flood fill, except delete instead of fill) and then keep parsing the new map. However, I think this will make the run time horribly slow, because I will have to go through the entire image to add it to a map before even starting the parse. Thoughts? Am I going about this all wrong?
Thanks.
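One way to avoid the copy-and-flood-delete pass is to flood fill over a separate visited mask, so the image itself is scanned once and never copied or modified. A minimal sketch of that idea in Python (the question uses Scala, but the approach translates directly), assuming Pillow and NumPy; the file name is illustrative:

```python
# A minimal sketch: one seed pixel per connected blob, using a visited
# mask instead of flood-deleting. Assumes Pillow and NumPy; "blobs.png"
# is an illustrative file name.
from collections import deque

import numpy as np
from PIL import Image

px = np.asarray(Image.open("blobs.png").convert("RGB"))
h, w, _ = px.shape
visited = np.zeros((h, w), dtype=bool)
seeds = []  # one (x, y, color) entry per connected blob

for y in range(h):
    for x in range(w):
        if visited[y, x]:
            continue
        color = tuple(px[y, x])
        seeds.append((x, y, color))
        # Flood fill over the mask: mark this blob's pixels as visited
        # so they are never used as a seed again.
        queue = deque([(x, y)])
        visited[y, x] = True
        while queue:
            cx, cy = queue.popleft()
            for nx, ny in ((cx + 1, cy), (cx - 1, cy), (cx, cy + 1), (cx, cy - 1)):
                if 0 <= nx < w and 0 <= ny < h and not visited[ny, nx] \
                        and tuple(px[ny, nx]) == color:
                    visited[ny, nx] = True
                    queue.append((nx, ny))

print(len(seeds), "blobs found")  # write these to your per-image text file
```

Every pixel enters the queue at most once, so the whole pass stays linear in the number of pixels, which is what matters when you have thousands of files.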

Why does PNGing an image from a JPG make it 10x bigger?

I have an image captured via webcam of my cat (the subject might not be important). I've acquired it as a 31 kB JPG file. When I open it with an image editor, then save it (without alteration) as a PNG (max. compression), it is stored as a 297 kB file.
Why is the PNG file 10x larger than the original JPG? As I understand it, opening a JPG is lossless, and saving a PNG is lossless. So where does all the extra data come from? If the image comes entirely out of the small file, why does it then re-save to 10x the size on disk?
Please read this carefully: I'm not asking why the two formats produce different file sizes from an original image. I'm asking why opening an existing JPG and then saving that exact same image as PNG is 10x bigger. As far as I can ascertain, this is not a duplicate question.
Some tests I've done:
I've looked at both JPG and PNG and they look identical.
I've zipped both files and got cat.jpg.zip as 31 kB, and cat.png.zip as 296 kB. I take this to mean that both files are fully compressed with virtually no latent redundancy.
I've tried this via the BMP format as well; cat.jpg (31 kB) -> cat.bmp (922 kB) -> cat.bmp.zip (404 kB).
Any ideas regarding the mysterious extra data?
JPEG inherently produces better compression than PNG. However, JPEG trades fidelity to the original image for better compression; PNG reproduces the original exactly.
If you go from JPEG to PNG, you are not going to see a change.
If you go from PNG to JPEG, it is likely you will see a lot of change.
JPEG uses a series of compression techniques. One of them, the DCT (together with the quantization of its output), transforms the image in a way that creates a subtle waviness in color. For example, if you start with a solid red block that is all one color, JPEG produces a lot of slight color variations.
PNG compression relies on finding repeated pixel patterns in scan lines. The subtle color variations introduced by JPEG make PNG compression less effective.
The extra data you refer to is simply the difference in how the two formats represent the same image.
If I take a JPEG image from a camera and convert it to PNG, the result is usually about 10 times larger.
For a PNG graphic image going to JPEG, I normally get files about 1/3 smaller.
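The effect is easy to reproduce with a smooth gradient, which PNG's predictors normally handle extremely well. A small experiment, assuming Pillow; file names and the quality setting are illustrative:

```python
# A small experiment, assuming Pillow; file names and the JPEG quality
# setting are illustrative.
import os
from PIL import Image

# A smooth gradient: PNG's prediction step compresses this very well.
im = Image.new("RGB", (256, 256))
im.putdata([(x, y, (x + y) // 2) for y in range(256) for x in range(256)])
im.save("gradient.png", optimize=True)
im.save("gradient.jpg", quality=75)

# Round-trip through JPEG, then save the decoded pixels as PNG again.
Image.open("gradient.jpg").save("roundtrip.png", optimize=True)

for f in ("gradient.png", "gradient.jpg", "roundtrip.png"):
    print(f, os.path.getsize(f), "bytes")
# roundtrip.png comes out noticeably larger than gradient.png, because
# JPEG's quantization noise defeats PNG's prediction and deflate stages.
```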
JPG uses lossy compression, while PNG uses lossless compression. When you convert JPG to PNG, what actually happens is decompressing the JPG and saving the result in PNG.
The "extra data" is actually due to the different algorithms used.
As for why the zipped files also differ in size: PNG has to save all pixels losslessly, including the artifacts that JPG's lossy compression introduced.

Why does a smaller PNG image take up more space than the original after being resized by GraphicsMagick?

The original PNG image is 800x1200 and takes up about 34K. After the image is resized by GraphicsMagick to 320x480, the resulting image takes up approximately 37K. (For comparison, if the image is resized with Paint on Windows 7, the resulting image is 40K.) What gives? The whole point of resizing an image was to save space. How should GraphicsMagick be used to shrink the image size?
PNG is a lossless format that compresses the image data by first performing a step called prediction and then applying the same algorithm used in zlib. The prediction step is crucial for compressing the file effectively, and it is based on the values of earlier neighboring pixels.
So, suppose you have a large PNG in black and white (by which I really mean only black and white; some people confuse that with grayscale). Also suppose it is not a tiny checkerboard pattern. In many regions of this image you will have a relatively large white region, then a relatively large black region, and so on. When the predictor is inside one of these large regions, it has no trouble correctly predicting that the current pixel intensity is exactly equal to the previous one. This makes the data describing your image much easier to compress.
Now, let us downscale this black-and-white image using a resampling filter other than nearest neighbor (say, Lanczos). This is very likely to turn your black-and-white image into a grayscale one, which has a much greater intensity range. That potentially makes the predictor's job much harder, and thus the final file size may be larger.
For instance, here is a black-and-white 256x256 PNG image which takes 5440 bytes, a resizing of it (using 3-lobed Lanczos) to 120x120 which now takes 7658 bytes, and another resizing (using nearest neighbor) to 120x120 which occupies 2467 bytes.
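The comparison is easy to rerun yourself. A minimal sketch, assuming Pillow, where "bw.png" stands for any strictly black-and-white input:

```python
# A minimal sketch of the comparison above, assuming Pillow;
# "bw.png" (a strictly black-and-white image) is illustrative.
import os
from PIL import Image

im = Image.open("bw.png")
im.resize((120, 120), Image.LANCZOS).save("lanczos.png", optimize=True)
im.resize((120, 120), Image.NEAREST).save("nearest.png", optimize=True)

for f in ("lanczos.png", "nearest.png"):
    print(f, os.path.getsize(f), "bytes")
# Nearest neighbor keeps the image two-valued, so the predictor and
# deflate stages compress it far better than the grayscale Lanczos result.
```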
PNG is a compressed format. Trying to compress an already maximally compressed item sometimes results in a larger item. So if the 800x1200 image is resized to a smaller size, but the result retains everything that was in the original, and the original is already as minimal as possible, you can see this happen. To demonstrate this, try using 7zip to compress some data with ultra compression, then try compressing the compressed file. Often the second compressed file will be larger than the first.
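The same demonstration takes a few lines with the Python standard library, using random bytes as a stand-in for already-compressed data:

```python
# A quick demonstration that compressing incompressible (e.g. already
# compressed) data makes it slightly larger; standard library only.
import os
import zlib

data = os.urandom(100_000)        # stand-in for already-compressed bytes
once = zlib.compress(data, 9)
twice = zlib.compress(once, 9)
print(len(data), len(once), len(twice))
# Each pass adds a little overhead: len(once) and len(twice) both exceed
# their inputs, because there is no redundancy left to exploit.
```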