I am using flutter_image_compress to compress images. I wrote a method that takes a target file size and tries to reduce the image to that size with the best possible quality. Here is the method:
import 'dart:io';

import 'package:flutter_image_compress/flutter_image_compress.dart';

Future<File?> compressImage(File image, {int kb = 50}) async {
  final inPath = image.absolute.path;
  // outPath was defined elsewhere in the original code; any writable path
  // different from inPath works, e.g.:
  final outPath = '${inPath}_compressed.jpg';
  int q = 95;
  File? result = image;
  while (q >= 1 && result != null && await result.length() > kb * 1024) {
    result = await FlutterImageCompress.compressAndGetFile(
      inPath,
      outPath,
      quality: q,
      minHeight: 1500,
      minWidth: 1500,
    );
    // Halve the quality and try again.
    q = q ~/ 2;
  }
  return result;
}
So basically this method checks whether the image is already smaller than the given size and, if not, compresses it. On each pass, the quality is halved until it reaches 1.
This method works, but I get very different image quality for the same target size on my iPhone and my Android phone.
On my iPhone (iPhone 11, iOS 15.5), I can compress the image down to 50 KB and the quality is perfect; I don't see any difference. On my Android phone (Samsung Galaxy A13, Android 12), if I compress the image down to 50 KB, the quality is pretty bad.
Android Example:
initial length: 1213154
compress quality: 47
compressed length: 89085
iOS Example:
initial length: 1409313
compress quality: 2
compressed length: 76801
So, in the iOS example, I had to use a quality of 2 to get it down to about 76 KB, and the result is perfect. On Android, I used a quality of 47 to get it down to 89 KB, and the quality is pretty bad.
Is there a way to reduce the size as much as possible without really losing image quality? Are there any other factors I missed? This does not really make sense to me, because the image on iOS looks way better even though it is compressed to a much lower file size.
The package seems to provide native code written in Kotlin (for Android) and Swift (for iOS), so there may be differences between those two implementations, or even a bug in the Android code leading to worse quality.
You could file an issue on their GitHub page.
Hello, I am using the image_picker plugin in Flutter, and with its imageQuality property I can change the quality of an image from 100 percent down to 1.
Some of my uploaded images are 500 KB and some are up to 10 MB, so some need compressing and some don't.
So is it possible to check the 100% quality version, and if it is smaller than 2 MB, keep it at 100% and upload it?
If it is bigger than 2 MB, drop the quality to 75%, check the size again, and upload if it is under 2 MB; otherwise drop to 50% quality.
Then try 25%, 10%, and 1%, and if it is still not under 2 MB at 1% quality, just don't upload it.
There must be a way to do this. Even Instagram tries to save photos at full size, but I don't think they would keep saving at full size if an image were bigger than 100 MB or so, right?
Also, this plugin opens a weird page showing only the last 40 images when picking a file. Why doesn't it open the default gallery app, even though I use the source: ImageSource.gallery property?
First, you can get the image size with the following code:
final bytes = image.readAsBytesSync().lengthInBytes;
final kb = bytes / 1024;
final mb = kb / 1024;
Then you can implement a chain of if-else checks (or a switch) over those size thresholds to implement the compression logic; see the sketch below.
Also, there's a package called flutter_image_compress which can compress your image after you have selected it.
They have also provided various examples for multiple file formats:
ImagePicker().pickImage(
  source: useCamera ? ImageSource.camera : ImageSource.gallery,
  imageQuality: 60,
)
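Putting the pieces together, here is a minimal sketch of that stepwise idea, assuming the picked file is already on disk and using flutter_image_compress for the re-encoding. tryQualities is a hypothetical helper name, and note that recent plugin versions return an XFile rather than a File (hence the File(compressed.path) conversion):

```dart
import 'dart:io';

import 'package:flutter_image_compress/flutter_image_compress.dart';

Future<File?> tryQualities(File picked, {int maxBytes = 2 * 1024 * 1024}) async {
  // Already under 2 MB: upload as-is at 100% quality.
  if (await picked.length() <= maxBytes) return picked;
  for (final q in [75, 50, 25, 10, 1]) {
    final compressed = await FlutterImageCompress.compressAndGetFile(
      picked.absolute.path,
      '${picked.absolute.path}_q$q.jpg', // output path must differ from input
      quality: q,
    );
    if (compressed == null) continue;
    final file = File(compressed.path);
    if (await file.length() <= maxBytes) return file;
  }
  return null; // still over 2 MB at quality 1: don't upload
}
```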
Hope it will help.
I have a device that produces 24-bit BMP images and makes them available in memory as an UnsafePointer<CChar>. I need to save them as JPEG, for which I'm relying on Core Image; however, it doesn't support that pixel format, so I'm stupidly adding an extra byte to each pixel to make them 32-bit as follows:
// Here imageBytesPointer is the UnsafePointer<CChar> and imageSize is the
// total byte count (width * height * bytesPerPixel)
let bytesPerPixel = 3
var imageBytesWithAlpha = [CChar]()
for i in stride(from: 0, to: imageSize, by: bytesPerPixel) {
    imageBytesWithAlpha.append(imageBytesPointer[i])
    imageBytesWithAlpha.append(imageBytesPointer[i + 1])
    imageBytesWithAlpha.append(imageBytesPointer[i + 2])
    imageBytesWithAlpha.append(0) // placeholder alpha byte
}
This is fast enough on some devices, but stupidly slow (3 seconds) on others. Is there a way to optimize the loop to make it fast? As in half-a-second fast? I tried to port this, but the embarrassing result is even slower than my naive solution above.
(If you are wondering why I'm doing the conversion myself: I attempted to use Core Video, but it leaks and I still cannot figure out why.)
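For what it's worth, the usual first optimization for a loop like this is to preallocate the destination buffer instead of growing it with append(), which avoids repeated reallocation and copying. A minimal sketch, assuming (as the loop above implies) that imageSize is the total source byte count (width * height * 3):

```swift
// Sketch: preallocate the 32-bit destination instead of appending byte by byte.
// Assumes imageBytesPointer: UnsafePointer<CChar> and imageSize = width * height * 3.
let bytesPerPixel = 3
let pixelCount = imageSize / bytesPerPixel
var imageBytesWithAlpha = [CChar](repeating: 0, count: pixelCount * 4)
for p in 0..<pixelCount {
    let src = p * bytesPerPixel
    let dst = p * 4
    imageBytesWithAlpha[dst] = imageBytesPointer[src]
    imageBytesWithAlpha[dst + 1] = imageBytesPointer[src + 1]
    imageBytesWithAlpha[dst + 2] = imageBytesPointer[src + 2]
    // imageBytesWithAlpha[dst + 3] is already 0 (the extra alpha byte).
}
```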
Setup
I am using a custom RenderBox to draw.
The canvas object in the code below comes from the PaintingContext in the paint method.
Drawing
I am trying to render pixels individually by using Canvas.drawRect.
I should point out that these are sometimes larger and sometimes smaller than the pixels on screen they actually occupy.
for (int i = 0; i < width * height; i++) {
  // In this case the rect size is 1.
  canvas.drawRect(
    Rect.fromLTWH((i % width).toDouble(), (i ~/ width).toDouble(), 1, 1),
    Paint()..color = colors[i ~/ width][i % width],
  );
}
Storage
I am storing the pixels as a List<List<Color>> (colors in the code above). I tried differently nested lists previously, but they did not cause any noticeable discrepancies in terms of performance.
The memory on my Android Emulator test device increases by 282.7MB when populating the list with a 999x999 image. Note that it only temporarily increases by 282.7MB. After about half a minute, the increase drops to 153.6MB and stays there (without any user interaction).
Rendering
With a resolution of 999x999, the code above causes a GPU max of 250.1 ms/frame and a UI max of 1835.9 ms/frame, which is obviously unacceptable. The UI freezes for two seconds when trying to draw a 999x999 image, which should be a piece of cake (I would guess) considering that 4k video runs smoothly on the same device.
CPU
I am not exactly sure how to track this properly using the Android profiler, but while populating or changing the list, i.e. drawing the pixels (which is also the case for the metrics above), CPU usage goes from 0% up to 60%.
Cause
I have no idea where to start since I am not even sure what part of my code causes the freezing. Is it the memory usage? Or the drawing itself?
How would I go about this in general? What am I doing wrong? How should I store these pixels instead?
Efforts
I have tried so many things that did not help at all that I will only point out the most notable ones:
I tried converting the List<List<Color>> to an Image from the dart:ui library, hoping to use Canvas.drawImage. In order to do that, I tried encoding my own PNG, but I was never able to render more than a single row; it did not look like it would boost performance anyway. When trying to convert a 9999x9999 image, I ran into an out-of-memory exception. Now I am wondering how video is rendered at all, as a few seconds of 4k video in memory will easily take up more than a 9999x9999 image.
I tried implementing the image package. However, I stopped before completing it, as I noticed that it is not meant to be used in Flutter but rather in HTML; I would not have gained anything by using it.
This one is pretty important for the conclusion I will draw: I tried to just draw without storing the pixels, i.e. using Random.nextInt to generate random colors. Randomly generating a 999x999 image this way resulted in a GPU max of 1824.7 ms/frame and a UI max of 2362.7 ms/frame, which is even worse, especially in the GPU department.
Conclusion
This is the conclusion I reached before my failed attempt at rendering with Canvas.drawImage: Canvas.drawRect is not made for this task, as it cannot even draw simple images.
How do you do this in Flutter?
Notes
This is basically what I tried to ask over two months ago (yes, I have been trying to resolve this issue for that long), but I think I did not express myself properly back then and knew even less about what the actual problem was.
The highest resolution I can properly render is around 10k pixels; I need at least 1M.
I am thinking that abandoning Flutter and going native might be my only option. However, I would like to believe that I am just approaching this problem completely wrong. I have spent about three months trying to figure this out, and I have not found anything that led me anywhere.
Solution
dart:ui has a function that converts pixels to an Image easily: decodeImageFromPixels
- Example implementation
- Issue on performance
- Does not work in the current master channel
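For reference, a minimal sketch of wrapping the callback-based decodeImageFromPixels in a Future; imageFromPixels is a hypothetical helper name, and the input is assumed to be raw RGBA bytes (width * height * 4):

```dart
import 'dart:async';
import 'dart:typed_data';
import 'dart:ui' as ui;

// Completes with a ui.Image built directly from raw RGBA pixel bytes.
Future<ui.Image> imageFromPixels(Uint8List rgba, int width, int height) {
  final completer = Completer<ui.Image>();
  ui.decodeImageFromPixels(
      rgba, width, height, ui.PixelFormat.rgba8888, completer.complete);
  return completer.future;
}
```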
I was simply not aware of this back when I created this answer, which is why I wrote the "Alternative" section.
Alternative
Thanks to #pslink for reminding me of BMP after I wrote that I had failed to encode my own PNG.
I had looked into it previously, but I thought it looked too complicated and had insufficient documentation. Now, I found this nice article explaining the necessary BMP headers and implemented 32-bit BGRA (ARGB, but BGRA is the byte order of the default mask) by copying Example 2 from the "BMP file format" Wikipedia article. I went through all its sources but could not find an original source for this example; maybe the authors of the Wikipedia article wrote it themselves.
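To illustrate the idea (a sketch under assumptions, not the original code from this answer, and using the simpler 40-byte BITMAPINFOHEADER rather than the V4 header from the Wikipedia example), here is how raw BGRx pixel data can be wrapped in a minimal 32-bit BMP; bmpFromPixels is a hypothetical helper, and rows are stored bottom-up:

```dart
import 'dart:typed_data';

// Wraps width * height * 4 bytes of BGRx pixel data in a 32-bit BMP.
// Fields not set below are left at zero, which is valid for BI_RGB.
Uint8List bmpFromPixels(Uint8List bgrx, int width, int height) {
  const headerSize = 54; // 14-byte file header + 40-byte BITMAPINFOHEADER
  final bytes = Uint8List(headerSize + bgrx.length);
  final data = ByteData.view(bytes.buffer);
  bytes[0] = 0x42; // 'B'
  bytes[1] = 0x4d; // 'M'
  data.setUint32(2, bytes.length, Endian.little); // total file size
  data.setUint32(10, headerSize, Endian.little); // offset of the pixel data
  data.setUint32(14, 40, Endian.little); // info header size
  data.setInt32(18, width, Endian.little);
  data.setInt32(22, height, Endian.little); // positive height: bottom-up rows
  data.setUint16(26, 1, Endian.little); // color planes
  data.setUint16(28, 32, Endian.little); // bits per pixel
  data.setUint32(30, 0, Endian.little); // BI_RGB: uncompressed
  bytes.setRange(headerSize, bytes.length, bgrx);
  return bytes;
}
```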
Results
Using Canvas.drawImage and my 999x999 pixels converted to an image from a BMP byte list, I get a GPU max of 9.9 ms/frame and a UI max of 7.1 ms/frame, which is awesome!
| ms/frame | Before (Canvas.drawRect) | After (Canvas.drawImage) |
|-----------|---------------------------|--------------------------|
| GPU max | 1824.7 | 9.9 |
| UI max | 2362.7 | 7.1 |
Conclusion
Canvas operations like Canvas.drawRect are not meant to be used like that.
Instructions
First off, this is quite straightforward. However, you need to correctly populate the byte list; otherwise, you will get an error saying that your data is not correctly formatted and see no results, which can be quite frustrating.
You will need to prepare your image before drawing as you cannot use async operations in the paint call.
In code, you need to use a Codec to transform your list of bytes into an image.
final list = [
  0x42, 0x4d, // 'B', 'M'
  ...];
// Make sure that you either know the file size, data size, and data offset
// beforehand or that you edit these bytes afterwards.
final Uint8List bytes = Uint8List.fromList(list);
final Codec codec = await instantiateImageCodec(bytes);
final Image image = (await codec.getNextFrame()).image;
You need to pass this image to your drawing widget, e.g. using a FutureBuilder.
Now, you can just use Canvas.drawImage in your draw call.
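For example, here is a minimal sketch of such a drawing widget's painter, assuming the ui.Image was decoded beforehand (PixelPainter is a hypothetical name):

```dart
import 'dart:ui' as ui;

import 'package:flutter/widgets.dart';

// Draws a pre-decoded ui.Image; paint() itself cannot await anything.
class PixelPainter extends CustomPainter {
  PixelPainter(this.image);
  final ui.Image image;

  @override
  void paint(Canvas canvas, Size size) {
    canvas.drawImage(image, Offset.zero, Paint());
  }

  @override
  bool shouldRepaint(covariant PixelPainter oldDelegate) =>
      oldDelegate.image != image;
}
```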
Python 3.6.6, Pillow 5.2.0
The Google Vision API has a size limit of 10485760 bytes.
When I'm working with a PIL Image and save it to bytes, it is hard to predict what the size will be. Sometimes when I resize it to a smaller height and width, the size in bytes gets bigger.
I've tried experimenting with modes and formats, to understand their impact on size, but I'm not having much luck getting consistent results.
So I start out with rawImage, which is bytes obtained from some user uploading an image (meaning I don't know much about what I'm working with yet).
import io
import sys

from PIL import Image

rawImageSize = sys.getsizeof(rawImage)
if rawImageSize >= 10485760:
    imageToShrink = Image.open(io.BytesIO(rawImage))
    ## do something to the image here to shrink it
    # ... mystery code ...
    ## ideally, the minimum amount of shrinkage necessary to get it under 10485760
    rawBuffer = io.BytesIO()
    # possibly convert to RGB first
    shrunkImage.save(rawBuffer, format='JPEG')  # PNG files end up bigger after this resizing (!?)
    rawImage = rawBuffer.getvalue()
print(sys.getsizeof(rawImage))
To shrink it I've tried getting a shrink ratio and then simply resizing it:
shrinkRatio = 10485760.0 / float(rawImageSize)
imageWidth, imageHeight = imageToShrink.size
shrunkImage = imageToShrink.resize((int(imageWidth * shrinkRatio),
                                    int(imageHeight * shrinkRatio)), Image.LANCZOS)
Of course, I could use a sufficiently small and somewhat arbitrary thumbnail size instead. I've thought about iterating over thumbnail sizes until a combination takes me below the maximum byte-size threshold. I'm guessing the byte size varies based on the color depth and mode (and what else?) of the image I got from the end user who uploaded it. And that brings me to my questions:
Can I predict the size in bytes a PIL Image will be before I convert it for consumption by Google Vision? What is the best way to manage that size in bytes before I convert it?
First of all, you probably don't need to max out the 10 MB limit imposed by the Google Vision API. In most cases, a much smaller file will be just fine, and faster.
In addition, you may want to keep in mind that the aspect ratio might lead to different results. See this: https://www.mlreader.com/prepare-image-for-google-vision-api
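Since the encoded size cannot be predicted reliably, a practical pattern is to encode, measure, and shrink until the result fits. A minimal sketch using Pillow, assuming JPEG output is acceptable; shrink_to_limit is a hypothetical helper name:

```python
import io

from PIL import Image


def shrink_to_limit(img: Image.Image, limit: int = 10485760) -> bytes:
    """Encode the image as JPEG, halving its dimensions until it fits."""
    while True:
        buffer = io.BytesIO()
        img.convert('RGB').save(buffer, format='JPEG')
        data = buffer.getvalue()
        if len(data) < limit:
            return data
        # Still too big: halve both dimensions and re-encode.
        img = img.resize((img.width // 2, img.height // 2), Image.LANCZOS)
```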
I am working with images of 2 to 4 MB at a resolution of 1200x1600, performing scaling, translation, and rotation operations. I want to add another image on top of that and save the result to the photo album. My app crashes after I successfully edit one image and save it to Photos. I think this is happening because of the image sizes. I want to maintain 90% of the resolution of the images.
I am releasing some images when I get a memory warning, but it still crashes, as I am working with two images of about 3 MB each, a 1200x1600 context, and an image obtained from that context, all at the same time.
Is there any way to compress the images and work with them?
I doubt it. Even compressing and decompressing an image without doing anything to it loses information. I suspect that any algorithms to manipulate compressed images would be hopelessly lossy.
Having said that, it may be technically possible. For instance, rotating a Fourier transform also rotates the original image. But practical image compression isn't usually as simple as just computing a Fourier transform.
Alternatively, you could write piecemeal algorithms that chop the image up into bite-sized pieces, transform the pieces and reassemble them afterwards. You might also provide a real-time view of the process by applying the same transform to a smaller version of the full image.
The key will be never to fully decode the entire image into memory at full size.
If you need to display the image, there's no reason to do that at full size -- the display on the iPhone is too small to take advantage of it. For image objects that are for display, decode the image in scaled-down form.
For processing, you will need to write custom code that works on a stream of pixels rather than an in-memory array. I don't know if such code is already available on the iPhone, but you can write it yourself against the libpng library API directly.
For example, your code right now probably looks something like this (pseudocode):
img = ReadImageFromFile("image.png")
img2 = RotateImage(img, 90)
SaveImage(img2, "image2.png")
The key thing to understand is that in this case, img is not the data in the PNG file (2 MB) but the fully uncompressed image (~6 MB). RotateImage (or whatever it's called) returns another image of about the same size. If you are scaling up, it's even worse.
You want code that looks more like this (but there might not be any APIs that let you do it -- you might have to write it yourself):
imgPixelGetter = PixelDecoderFromFile("image.png")
imgPixelSaver = OpenImageForAppending("image2.png")
w = imgPixelGetter.Width
h = imgPixelGetter.Height
// set up a 90 degree rotate
imgPixelSaver.Width = h
imgPixelSaver.Height = w
// read each vertical scanline of pixels
for (x = 0; x < w; ++x) {
    pixelRect = imgPixelGetter.ReadRect(x, 0, 1, h) // x, y, w, h
    pixelRect.Rotate(90) // it's now got a width of h and a height of 1
    imgPixelSaver.AppendScanLine(pixelRect)
}
In this algorithm, you never had the entire image in memory at once -- you read it out piece by piece and saved it. You can write similar algorithms for scaling and cropping.
The tradeoff is that it will be slower than just decoding it into memory -- it depends on the image format and the code that's doing the ReadRect(). Unfortunately, PNG is not designed for this kind of access to the pixels.