I have these questions:

1. Read the image ('lena_gray.jpg').
2. Write the image to disk for quality = [0 5 15 25 50].
3. Find the compression ratio for each value of quality.

I solved the first and second questions. For the third, I:

a) calculated the original image size, "OriginalFileSize";
b) calculated the size of the compressed image, "CompressedFileSize", for each quality.

But when I compute "compressionRatio = OriginalFileSize / CompressedFileSize" for each quality, I get the same result even though the quality is different.

What is wrong, and how can I fix it? Is there another way to solve the three questions?
figure
Original = imread('lena (1).jpg');
imshow(Original)
information = imfinfo('lena (1).jpg');
OriginalFileSize = (information.Width * information.Height * information.BitDepth) / 8

% for Quality 0
imwrite(Original, 'CompressedQuality1.jpg', 'jpg', 'Quality', 0)
CompressedFileSize1 = information.FileSize
CompressedRatio1 = OriginalFileSize / CompressedFileSize1

% for Quality 5
imwrite(Original, 'CompressedQuality2.jpg', 'jpg', 'Quality', 5)
CompressedFileSize2 = information.FileSize
CompressedRatio2 = OriginalFileSize / CompressedFileSize2

% for Quality 15
imwrite(Original, 'CompressedQuality3.jpg', 'jpg', 'Quality', 15)
CompressedFileSize3 = information.FileSize
CompressedRatio3 = OriginalFileSize / CompressedFileSize3

% for Quality 25
imwrite(Original, 'CompressedQuality4.jpg', 'jpg', 'Quality', 25)
CompressedFileSize4 = information.FileSize
CompressedRatio4 = OriginalFileSize / CompressedFileSize4

% for Quality 50
imwrite(Original, 'CompressedQuality5.jpg', 'jpg', 'Quality', 50)
CompressedFileSize5 = information.FileSize
CompressedRatio5 = OriginalFileSize / CompressedFileSize5
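The ratios come out identical because `information` was returned by `imfinfo` on the *original* file, so `information.FileSize` never changes; calling `imfinfo` again on each compressed file (e.g. `info1 = imfinfo('CompressedQuality1.jpg'); CompressedFileSize1 = info1.FileSize`) gives the per-quality sizes. The same idea, sketched in Python with Pillow (the gradient image is a stand-in for lena_gray.jpg, which I can't assume is present):

```python
import io
from PIL import Image

# Synthetic stand-in for lena_gray.jpg: a 256x256 8-bit grayscale gradient.
original = Image.new('L', (256, 256))
original.putdata([(x + y) % 256 for y in range(256) for x in range(256)])

# Uncompressed size: width * height * 1 byte per pixel (8-bit grayscale).
original_size = original.width * original.height * 1

for quality in [1, 5, 15, 25, 50]:
    buf = io.BytesIO()
    original.save(buf, format='JPEG', quality=quality)
    compressed_size = buf.tell()      # size of THIS compressed file, not the original
    ratio = original_size / compressed_size
    print(quality, compressed_size, ratio)
```

The key line is measuring the buffer produced by *this* save; each quality then yields a different compressed size and a different ratio.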
I am doing a project for university where I detect an object with a U-Net and then calculate the width of the object. I trained my U-Net on images of size 300x300. Now I want to improve the accuracy of the width measurement, so I want to feed larger images (say 600x600) into the model. Does this difference in size (training on 300x300, inferring on 600x600) impact the overall segmentation quality?
I'm guessing it does, but I am not sure.
I'm trying to obtain a 512 px x 512 px PNG image smaller than 100 kB.
At the moment the files are around 350 kB. One way I was thinking of reducing the file size is to reduce the color depth.
This is the information of the PNG images:
bits per component -> 8
bits per pixel -> 32
I wrote some code to create the CGContext with a different bits-per-pixel value, but I don't think that's the right way.
I don't want to use UIImage.jpegData(compressionQuality: CGFloat), since I need to maintain the alpha channel.
I already found some code in Objective-C, but that didn't help; I'm looking for a solution in Swift.
You would need to decimate the original image somehow, e.g. by zeroing some number of the least significant bits or by reducing the resolution. However, that partly defeats the purpose of using PNG in the first place, which is intended for lossless image compression.
If you want lossy image compression, where decimation followed by PNG is one approach, then you should instead use JPEG, which makes much more efficient use of the bits to reproduce a psycho-visually highly similar image; more efficient than anything you or I might come up with as a lossy pre-processing step to PNG.
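As a rough illustration of the decimation idea (a Pillow sketch, not Swift; the 4-bit cut and the noise image are my own arbitrary choices): zeroing the low bits makes the pixel data more repetitive, so PNG's lossless DEFLATE stage compresses it harder, while the alpha channel is preserved.

```python
import io
import random
from PIL import Image

random.seed(0)
# Stand-in RGBA image with noisy pixels (close to the worst case for PNG).
img = Image.new('RGBA', (256, 256))
img.putdata([(random.randrange(256), random.randrange(256),
              random.randrange(256), 255) for _ in range(256 * 256)])

def png_size(im):
    buf = io.BytesIO()
    im.save(buf, format='PNG')
    return buf.tell()

# Zero the 4 least significant bits of every channel: a lossy pre-step that
# leaves only 16 levels per channel, which PNG then compresses losslessly.
decimated = img.point(lambda v: v & 0xF0)

print(png_size(img), png_size(decimated))  # the decimated file is smaller
```

On real photographs the gain is usually smaller than on pure noise, which is exactly the answer's point: a JPEG encoder spends the same bit budget far more effectively.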
Python 3.6.6, Pillow 5.2.0
The Google Vision API has a size limit of 10485760 bytes.
When I'm working with a PIL Image and save it to bytes, it is hard to predict what the size will be. Sometimes when I resize it to a smaller height and width, the image's size in bytes gets bigger.
I've tried experimenting with modes and formats to understand their impact on size, but I'm not having much luck getting consistent results.
So I start out with rawImage, the raw bytes of an image some user uploaded (meaning I don't yet know much about what I'm working with).
rawImageSize = sys.getsizeof(rawImage)
if rawImageSize >= 10485760:
    imageToShrink = Image.open(io.BytesIO(rawImage))
    ## do something to the image here to shrink it
    # ... mystery code ...
    ## ideally, the minimum amount of shrinkage necessary to get it under 10485760
    rawBuffer = io.BytesIO()
    # possibly convert to RGB first
    shrunkImage.save(rawBuffer, format='JPEG')  # PNG files end up bigger after this resizing (!?)
    rawImage = rawBuffer.getvalue()
    print(sys.getsizeof(rawImage))
To shrink it, I've tried computing a shrink ratio and then simply resizing:
shrinkRatio = 10485760.0 / float(rawImageSize)
imageWidth, imageHeight = imageToShrink.size
shrunkImage = imageToShrink.resize((int(imageWidth * shrinkRatio),
    int(imageHeight * shrinkRatio)), Image.LANCZOS)
Of course I could instead use a sufficiently small, somewhat arbitrary thumbnail size. I've thought about iterating over thumbnail sizes until one takes me below the maximum byte-size threshold. I'm guessing the byte size varies with the color depth and mode of whatever the end user uploaded. And that brings me to my questions:
Can I predict the size in bytes a PIL Image will be before I convert it for consumption by Google Vision? What is the best way to manage that size in bytes before I convert it?
First of all, you probably don't need to go all the way to the 10 MB limit imposed by the Google Vision API. In most cases a much smaller file will be just fine, and faster.
In addition to that, keep in mind that the aspect ratio might affect the results. See https://www.mlreader.com/prepare-image-for-google-vision-api
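Since the encoded size depends on the image content, there is no reliable closed-form prediction; the practical pattern is encode, measure with `len()` on the buffer's bytes (unlike `sys.getsizeof`, `len` gives the exact byte count, without Python object overhead), and iterate. A minimal sketch, assuming a JPEG quality ladder and halving steps that are my own choices (the 10485760 limit is from the question):

```python
import io
from PIL import Image

LIMIT = 10485760  # Google Vision API byte limit from the question

def shrink_to_limit(img, limit=LIMIT):
    """Re-encode as JPEG, lowering quality, then dimensions, until it fits."""
    img = img.convert('RGB')  # JPEG has no alpha channel
    for quality in (95, 85, 75, 60, 45, 30):
        buf = io.BytesIO()
        img.save(buf, format='JPEG', quality=quality)
        data = buf.getvalue()
        if len(data) <= limit:  # measure the real encoded size
            return data
    # Still too big at the lowest quality: halve the dimensions and retry.
    half = (max(1, img.width // 2), max(1, img.height // 2))
    return shrink_to_limit(img.resize(half, Image.LANCZOS), limit)

small = shrink_to_limit(Image.new('RGB', (800, 600), 'white'))
print(len(small))
```

This always stops (dimensions shrink toward 1x1), and it returns the least-shrunk encoding that the ladder finds under the limit.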
The documentation of Tesseract says:
Make sure there are a minimum number of samples of each character. 10
is good, but 5 is OK for rare characters.
There should be more samples of the more frequent characters - at
least 20.
I assume the last sentence means: at least 20 samples of the more frequent characters would be OK. But what counts as a good frequency?
Also:
Tesseract works best on images which have a DPI of at least 300 dpi,
so it may be beneficial to resize images. For more information see the
FAQ.
Why does Tesseract work best at 300 DPI? Isn't DPI just a setting telling at what scale an image will be printed? Why DPI and not just a minimum height in pixels?
Also, what would be a good height of a character in pixels?
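For what it's worth, DPI only turns into a pixel height once a physical size is fixed, which is why the two measures are interchangeable for scanned text: 1 pt is 1/72 inch. A worked conversion (the 12 pt glyph size is just an example, not a Tesseract recommendation):

```python
# 1 pt = 1/72 inch, so a glyph set in 12 pt type is 12/72 inch tall.
point_size = 12

def glyph_height_px(points, dpi):
    """Pixel height of a physical size given in points, at a given DPI."""
    return points * dpi / 72

print(glyph_height_px(point_size, 300))  # 50.0 px at 300 DPI
print(glyph_height_px(point_size, 72))   # 12.0 px at 72 DPI
```

So "at least 300 DPI" for ordinary print sizes is an indirect way of asking for characters tens of pixels tall rather than around ten.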
The problem is to change an image's resolution from 300 dpi to 200 dpi or 600 dpi.
I am using MATLAB. So far I have used the imresize function, which down-samples or up-samples the image:
imresize(image, scale, interpolation)
How can I down-sample the image so as to reduce its quality, so that I can check the difference between the original image and the down-sampled one?
j = imresize(I, 0.2, 'nearest');
where I is the original image and j is the down-sampled image. Is this changing the DPI of the image?
Dots per inch (DPI) has nothing to do with the type of resizing done by imresize. In fact, changing the DPI does not even require changing the actual image data, just the metadata -- a property or label. DPI gives you the information needed to go from pixels -> inches (print size).
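This can be seen directly: saving the same pixels with two different DPI tags produces images whose pixel data is identical and only the print-size label differs. A sketch with Pillow (the PNG format choice is arbitrary; PNG stores the label in its pHYs chunk):

```python
import io
from PIL import Image

img = Image.new('L', (100, 100), 128)  # same pixel data for both saves

def save_with_dpi(im, dpi):
    buf = io.BytesIO()
    im.save(buf, format='PNG', dpi=(dpi, dpi))  # dpi is pure metadata here
    return buf.getvalue()

low = Image.open(io.BytesIO(save_with_dpi(img, 72)))
high = Image.open(io.BytesIO(save_with_dpi(img, 300)))

# Identical pixels, different print-size labels:
print(list(low.getdata()) == list(high.getdata()))  # True
print(low.info.get('dpi'), high.info.get('dpi'))
```

By contrast, imresize really does change the pixel data, which is why it degrades quality; changing only the DPI tag would not.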