In Objective-C, how does one get the file size of a certain UIImage stored in a custom NSMutableArray? That's the first thing I want to do. And second, knowing that the image is bigger in file size (say 15 MB) than my own limit (say 5 MB), how does one compress the image so it ends up as close as possible to that limit, say 4.99 MB?
I have seen this covered in another question on Stack Overflow: image compression by size - iPhone SDK.
I combined two answers from there, and I also used this code to do the compression:
CGFloat compression = 0.9f;      // starting JPEG quality
CGFloat maxCompression = 0.1f;   // don't go below this quality
int maxFileSize = 250*1024;      // target: 250 KB

NSData *imageData = UIImageJPEGRepresentation(yourImage, compression);

// Re-encode at progressively lower quality until the data fits
// or the minimum quality is reached.
while ([imageData length] > maxFileSize && compression > maxCompression)
{
    compression -= 0.1;
    imageData = UIImageJPEGRepresentation(yourImage, compression);
}
One way to do it is to re-compress the file in a loop until you reach the desired size. You could first look at the height and width and guess a compression factor (the larger the image, the more compression), then compress it, check the size, and split the difference again.
I know this is not super efficient, but I do not believe there is a single call that produces an image of a specific file size.
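A minimal sketch of that "split the difference" idea, assuming a hypothetical helper named compressImageToMaxBytes; it binary-searches the JPEG quality instead of stepping it down linearly:

// Hypothetical helper: binary-search the JPEG quality until the encoded data
// fits under maxBytes (returns nil if even the lowest quality is too large).
static NSData *compressImageToMaxBytes(UIImage *image, NSUInteger maxBytes)
{
    CGFloat lo = 0.0f, hi = 1.0f;
    NSData *best = nil;
    for (int i = 0; i < 8; i++) {              // 8 halvings narrow the quality to ~1/256
        CGFloat mid = (lo + hi) / 2.0f;
        NSData *candidate = UIImageJPEGRepresentation(image, mid);
        if ([candidate length] <= maxBytes) {
            best = candidate;                  // fits: try a higher quality next
            lo = mid;
        } else {
            hi = mid;                          // too big: lower the quality
        }
    }
    return best;
}

If even quality 0.0 overshoots the limit, the remaining option is to scale the image down to smaller pixel dimensions before re-encoding.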
Python 3.6.6, Pillow 5.2.0
The Google Vision API has a size limit of 10485760 bytes.
When I'm working with a PIL Image and save it to bytes, it is hard to predict what the size will be. Sometimes when I try to resize it to a smaller height and width, the image's size in bytes gets bigger.
I've tried experimenting with modes and formats, to understand their impact on size, but I'm not having much luck getting consistent results.
So I start out with a rawImage, which is bytes obtained from a user uploading an image (meaning I don't know much about what I'm working with yet).
import io
import sys

from PIL import Image

rawImageSize = sys.getsizeof(rawImage)  # rawImage is the uploaded bytes
if rawImageSize >= 10485760:
    imageToShrink = Image.open(io.BytesIO(rawImage))
    ## do something to the image here to shrink it
    # ... mystery code ...
    ## ideally, the minimum amount of shrinkage necessary to get it under 10485760
    rawBuffer = io.BytesIO()
    # possibly convert to RGB first
    shrunkImage.save(rawBuffer, format='JPEG')  # PNG files end up bigger after this resizing (!?)
    rawImage = rawBuffer.getvalue()
    print(sys.getsizeof(rawImage))
To shrink it I've tried getting a shrink ratio and then simply resizing it:
shrinkRatio = 10485760.0 / float(rawImageSize)
imageWidth, imageHeight = imageToShrink.size
shrunkImage = imageToShrink.resize((int(imageWidth * shrinkRatio),
                                    int(imageHeight * shrinkRatio)), Image.LANCZOS)
Of course I could use a sufficiently small and somewhat arbitrary thumbnail size instead. I've thought about iterating thumbnail sizes until a combination takes me below the maximum byte-size threshold. I'm guessing the byte size varies based on the color depth and mode and (?) of the image I got from the end user who uploaded the original. And that brings me to my questions:
Can I predict the size in bytes a PIL Image will be before I convert it for consumption by Google Vision? What is the best way to manage that size in bytes before I convert it?
First of all, you probably don't need to go right up to the 10 MB limit imposed by the Google Vision API. In most cases a much smaller file will be just fine, and faster.
In addition, keep in mind that the aspect ratio might lead to different results. See this: https://www.mlreader.com/prepare-image-for-google-vision-api
I want to keep a UIImage the same viewable size but reduce its file size. Is there a way to do this?
For example, if the user is saving 10 images taken with the camera, I'd like them to end up with a smaller file size while keeping most of the quality and the same width and height as the original image.
Save as JPEG with higher compression (this will reduce the quality):
NSData * UIImageJPEGRepresentation(
    UIImage *image,
    CGFloat compressionQuality
);
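For example, a minimal sketch (the 0.6 quality and the photo and savePath names are assumptions for illustration):

// compressionQuality ranges from 0.0 (maximum compression, lowest quality)
// to 1.0 (minimum compression, best quality).
NSData *jpegData = UIImageJPEGRepresentation(photo, 0.6f);
BOOL saved = [jpegData writeToFile:savePath atomically:YES];

The pixel dimensions stay the same; only the JPEG encoding quality, and therefore the file size, goes down.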
When sharing an image on an iPhone, we are given the opportunity to pick from different sizes of an image - Small, Medium, Large, Original - and to see the size in KB/MB along with each option.
Where does Apple expose this code for developers to leverage? I couldn't spot it in my searches of the iOS docs.
What are some tested & tried frameworks/methods available on GitHub or elsewhere that can emulate this behavior?
EDITED:
I evaluated the solution posted by Brad Larson (UIImage: Resize, then Crop) and confirmed a decrease in size via the size-measuring solution given below. Together they are a good fit.
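A rough sketch of combining the two (resize, then measure) to emulate the Small/Medium/Large preview; the preset widths, the 0.75 JPEG quality, and the photo variable are assumptions for illustration, not Apple's actual values:

// Scale the image down to a maximum width (keeping the aspect ratio),
// then encode it as JPEG so its byte size can be reported.
- (NSData *)dataForImage:(UIImage *)image scaledToMaxWidth:(CGFloat)maxWidth
{
    CGFloat scale = MIN(1.0f, maxWidth / image.size.width);
    CGSize newSize = CGSizeMake(image.size.width * scale, image.size.height * scale);

    UIGraphicsBeginImageContextWithOptions(newSize, NO, 1.0f);
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *resized = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    return UIImageJPEGRepresentation(resized, 0.75f);
}

// Report an estimated size for each preset, roughly like the share sheet does.
for (NSNumber *width in @[@320, @640, @1280]) {
    NSData *data = [self dataForImage:photo scaledToMaxWidth:[width floatValue]];
    NSLog(@"max width %@: %.2f KB", width, [data length] / 1024.0);
}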
Where does Apple expose this code for developers to leverage? I couldn't spot it in my searches of the iOS docs.
iOS is not an open-source project. Apple only releases sample code, which is meant to be used like a tutorial project.
Here's an article regarding the calculation of image file size.
This is very simple, though. Here's the code:
UIImage *image = [UIImage imageNamed:@"your_big_image.png"];
size_t depth = CGImageGetBitsPerPixel(image.CGImage);
size_t width = CGImageGetWidth(image.CGImage);
size_t height = CGImageGetHeight(image.CGImage);
// width * height * bits-per-pixel, converted from bits to bytes
double bytes = ((double)width * (double)height * (double)depth) / 8.0;
Now you have the image's uncompressed (in-memory) size in bytes.
To convert it to kB, divide by 1024. For MB, divide by 1048576 (1024 x 1024).
double kb = bytes / 1024.0;
double mb = bytes / 1048576.0;
Then you can display it to your user by formatting the message as follows:
NSString *msg = [NSString stringWithFormat:@"Original %dx%d (%.2f MB)", (int)width, (int)height, (float)mb];
From time to time I need to know the width and height of images. I am using the following code:
UIImage *imageU = [UIImage imageWithContentsOfFile:[[[NSBundle mainBundle] resourcePath] stringByAppendingPathComponent:@"myImage.png"]];
CGFloat imageW = CGImageGetWidth(imageU.CGImage);
CGFloat imageH = CGImageGetHeight(imageU.CGImage);
My question is whether there is any other way to know the width and height of an image, in pixels, without having to load the image into a variable, which probably consumes memory. Can the dimensions be read directly from the file without loading the whole image?
This has to work for PNG and JPEG.
Thanks.
You can parse the PNG file, which contains the size info in its header. See Get size of image without loading in to memory.
(BTW, you can use imageU.size to get a CGSize of the image.)
I don't think you can do this "for free" in terms of memory use. You could create an NSInputStream and read/parse the data of the IHDR section (follow @KennyTM's link), then discard everything.
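A minimal sketch of that idea for PNG only (JPEG would need different parsing), assuming pngPath points at the file; only the first 24 bytes are read, and the IHDR chunk stores width and height as big-endian 32-bit integers:

// PNG layout: 8-byte signature, 4-byte chunk length, 4-byte "IHDR",
// 4-byte width, 4-byte height -- so 24 bytes are enough for the dimensions.
NSFileHandle *handle = [NSFileHandle fileHandleForReadingAtPath:pngPath];
NSData *header = [handle readDataOfLength:24];
[handle closeFile];

uint32_t width = 0, height = 0;
if ([header length] == 24) {
    [header getBytes:&width range:NSMakeRange(16, 4)];
    [header getBytes:&height range:NSMakeRange(20, 4)];
    width = CFSwapInt32BigToHost(width);    // PNG integers are big-endian
    height = CFSwapInt32BigToHost(height);
}
NSLog(@"PNG dimensions: %u x %u", (unsigned)width, (unsigned)height);

If you also need JPEG, ImageIO's CGImageSourceCreateWithURL plus CGImageSourceCopyPropertiesAtIndex can report kCGImagePropertyPixelWidth and kCGImagePropertyPixelHeight without decoding the full bitmap, which is a different route from parsing the header yourself.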
I am using the following code to resize the image:
Resize a UIImage Right Way
I use kCGInterpolationLow as the interpolation quality.
Then I use UIImageJPEGRepresentation(image, 0.0) to get the NSData of that image.
It is still a little large, around 100 KB, when I send it over the network. Can I reduce it further? If I want to reduce it more, what could I do?
Thanks and Kind Regards,
To compress your image and store the image data in NSData format, the function is:
UIImageJPEGRepresentation(UIImage * _Nonnull image, CGFloat compressionQuality);
Example:
NSData *objImgData = UIImageJPEGRepresentation(objImg, 1.0); // 1.0 = best quality / largest file; pass a lower value (e.g. 0.5) to shrink it further