Is it possible to convert a CameraImage (yuv420 or bgra) to JPEG and then encode it in base64?
List<String> convertFaceImages(Map<String, CameraImage> images) {
  final sortedImages = sortImages(images);
  List<String> base64Images = [];
  List<img.Image> listImage = [];
  for (CameraImage image in sortedImages) {
    var yuv420Image =
        img.Image(image.width, image.height); // Create Image buffer
    Plane plane = image.planes[0];
    const int shift = (0xFF << 24);
    // Fill image buffer with plane[0] from YUV420_888
    for (int x = 0; x < image.width; x++) {
      for (int planeOffset = 0;
          planeOffset < image.height * image.width;
          planeOffset += image.width) {
        final pixelColor = plane.bytes[planeOffset + x];
        // color: 0x FF FF FF FF
        //            A  B  G  R
        // Calculate pixel color
        var newVal =
            shift | (pixelColor << 16) | (pixelColor << 8) | pixelColor;
        yuv420Image.data[planeOffset + x] = newVal;
      }
    }
    listImage.add(yuv420Image);
  }
  for (img.Image image in listImage) {
    final encodedImage = base64Encode(img.encodeJpg(image));
    logger.w(encodedImage);
    base64Images.add(encodedImage);
  }
  return base64Images;
}
I tried to do it with this code, but when I try to decode the image on the backend (or at https://base64.guru/converter/decode/image) I just get a black image. On top of that, when I tried to display these images in my app, they were black-and-white and rotated 90 degrees.
Can someone help me please? I've been trying to get this right for a very long time.
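For the JPEG-and-base64 half of the question, here is a minimal sketch (assuming `img` is the package:image v3 alias from the snippet above and the frame has already been converted to a color `img.Image`, e.g. with one of the YUV420 converters further down; the helper name is mine):

import 'dart:convert';

import 'package:image/image.dart' as img;

// Sketch: JPEG-encode an already converted frame and base64-encode the bytes.
String frameToBase64Jpeg(img.Image frame, {int quality = 90}) {
  final List<int> jpegBytes = img.encodeJpg(frame, quality: quality);
  return base64Encode(jpegBytes);
}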
Related
I want to change the black background of my image to transparent; I receive the image in base64-encoded format. I can change the black to other colors, but setting the alpha is not working.
This is my code example:
List<int> switchColor(Uint8List bytes) {
  final image = External.decodeImage(bytes);
  final pixels = image!.getBytes(format: External.Format.rgba);
  final int length = pixels.lengthInBytes;
  for (var i = 0; i < length; i += 4) {
    if (pixels[i] == 0 && pixels[i + 1] == 0 && pixels[i + 2] == 0) {
      pixels[i + 3] = 0;
    }
  }
  return External.encodePng(image);
}
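A variant that works per pixel instead of through the raw byte buffer may be easier to reason about. This is only a sketch, assuming `External` is the package:image v3 alias used above; `keyOutBlack` and `threshold` are my own names, and the threshold exists because lossy JPEG input rarely contains exact 0,0,0 black:

import 'dart:typed_data';

import 'package:image/image.dart' as External;

// Sketch (package:image v3): make near-black pixels fully transparent.
List<int> keyOutBlack(Uint8List bytes, {int threshold = 0}) {
  final image = External.decodeImage(bytes)!;
  for (var y = 0; y < image.height; y++) {
    for (var x = 0; x < image.width; x++) {
      final p = image.getPixel(x, y);
      if (External.getRed(p) <= threshold &&
          External.getGreen(p) <= threshold &&
          External.getBlue(p) <= threshold) {
        image.setPixelRgba(x, y, 0, 0, 0, 0); // alpha = 0 -> transparent
      }
    }
  }
  return External.encodePng(image); // PNG keeps the alpha channel
}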
const shift = (0xFF << 24);

Future<Image> convertYUV420toImageColor(CameraImage image) async {
  try {
    final int width = image.width;
    final int height = image.height;
    final int uvRowStride = image.planes[1].bytesPerRow;
    final int uvPixelStride = image.planes[1].bytesPerPixel;
    print("uvRowStride: " + uvRowStride.toString());
    print("uvPixelStride: " + uvPixelStride.toString());

    // imgLib -> Image package from https://pub.dartlang.org/packages/image
    var img = imglib.Image(width, height); // Create Image buffer

    // Fill image buffer with plane[0] from YUV420_888
    for (int x = 0; x < width; x++) {
      for (int y = 0; y < height; y++) {
        final int uvIndex =
            uvPixelStride * (x / 2).floor() + uvRowStride * (y / 2).floor();
        final int index = y * width + x;

        final yp = image.planes[0].bytes[index];
        final up = image.planes[1].bytes[uvIndex];
        final vp = image.planes[2].bytes[uvIndex];
        // Calculate pixel color
        int r = (yp + vp * 1436 / 1024 - 179).round().clamp(0, 255);
        int g = (yp - up * 46549 / 131072 + 44 - vp * 93604 / 131072 + 91)
            .round()
            .clamp(0, 255);
        int b = (yp + up * 1814 / 1024 - 227).round().clamp(0, 255);
        // color: 0x FF FF FF FF
        //            A  B  G  R
        img.data[index] = shift | (b << 16) | (g << 8) | r;
      }
    }

    imglib.PngEncoder pngEncoder = new imglib.PngEncoder(level: 0, filter: 0);
    List<int> png = pngEncoder.encodeImage(img);
    muteYUVProcessing = false;
    return Image.memory(png);
  } catch (e) {
    print(">>>>>>>>>>>> ERROR:" + e.toString());
  }
  return null;
}
I have been following this code snippet from How to convert Camera Image to Image in Flutter? to convert YUV to RGB so I can send the images over WebSockets for ML prediction.
Although the conversion works, the resulting image is rotated 90 degrees and the performance is a little slow. How can I rotate it?
Replace
img.data[index] = shift | (b << 16) | (g << 8) | r;
with
if (img.boundsSafe(height - y, x)) {
  img.setPixelRgba(height - y, x, r, g, b, shift);
}
and replace
var img = imglib.Image(width, height);
with
var img = imglib.Image(height, width);
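An alternative (my sketch, not part of the original answer): keep the conversion loop untouched and rotate the finished frame afterwards with package:image v3's copyRotate, at the cost of one extra pass over the pixels:

// Sketch: rotate the converted frame instead of swapping x/y in the loop.
// The angle (90 vs -90) depends on the device/sensor orientation.
final rotated = imglib.copyRotate(img, 90);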
For the iOS version, the CameraImage is returned as biplanar, which has only two planes.
Quote from image_format_group.dart:
/// Multi-plane YUV 420 format.
/// This format is a generic YCbCr format, capable of describing any 4:2:0
/// chroma-subsampled planar or semiplanar buffer (but not fully interleaved),
/// with 8 bits per color sample.
/// On Android, this is `android.graphics.ImageFormat.YUV_420_888`. See
/// https://developer.android.com/reference/android/graphics/ImageFormat.html#YUV_420_888
/// On iOS, this is `kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange`. See
/// https://developer.apple.com/documentation/corevideo/1563591-pixel_format_identifiers/kcvpixelformattype_420ypcbcr8biplanarvideorange?language=objc
yuv420
For my approach, I use dart:ffi to link to C++ for the Android CameraImage, following the tutorial here. The conversion between YUV420p and YUV420sp can be found here. There is no complete code for that conversion yet, nor is there any solution for iOS on the forum.
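One workaround worth noting (my addition, not part of the quoted answer): if the iOS pipeline only needs RGB frames, the camera plugin can be asked for ImageFormatGroup.bgra8888, in which case plane 0 already holds interleaved BGRA bytes and package:image v3 can wrap it directly:

// Sketch: iOS path, assuming the CameraController was created with
// imageFormatGroup: ImageFormatGroup.bgra8888 so that plane 0 is BGRA.
imglib.Image convertBGRA8888ToImage(CameraImage image) {
  return imglib.Image.fromBytes(
    image.width,
    image.height,
    image.planes[0].bytes,
    format: imglib.Format.bgra,
  );
}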
Currently I have a Uint8List formatted like [R,G,B,R,G,B,...] for all the pixels of the image, and of course I have its width and height.
I found decodeImageFromPixels while searching, but it only takes the RGBA/BGRA formats. I converted my pixmap from RGB to RGBA and the function works fine.
However, my code now looks like this:
Uint8List rawPixel = raw.value.asTypedList(w * h * channel);
List<int> rgba = [];
for (int i = 0; i < rawPixel.length; i++) {
  rgba.add(rawPixel[i]);
  if ((i + 1) % 3 == 0) {
    rgba.add(0);
  }
}
Uint8List rgbaList = Uint8List.fromList(rgba);

Completer<Image> c = Completer<Image>();
decodeImageFromPixels(rgbaList, w, h, PixelFormat.rgba8888, (Image img) {
  c.complete(img);
});
I have to make a new list (wasted space) and iterate through the entire list (wasted time).
This is too inefficient in my opinion; is there any way to make it more elegant, like adding a new PixelFormat.rgb888?
Thanks in advance.
You may find that this loop is faster, as it doesn't keep appending to a growable list and then copying it at the end.
final rawPixel = raw.value.asTypedList(w * h * channel);
// Create the Uint8List directly, as we know the width and height.
final rgbaList = Uint8List(w * h * 4);
for (var i = 0; i < w * h; i++) {
  final rgbOffset = i * 3;
  final rgbaOffset = i * 4;
  rgbaList[rgbaOffset] = rawPixel[rgbOffset]; // red
  rgbaList[rgbaOffset + 1] = rawPixel[rgbOffset + 1]; // green
  rgbaList[rgbaOffset + 2] = rawPixel[rgbOffset + 2]; // blue
  rgbaList[rgbaOffset + 3] = 255; // alpha
}
An alternative is to prepend the array with a BMP header by adapting this answer (though it would be simpler, as there would be no palette) and passing that bitmap to instantiateImageCodec, since that code is presumably highly optimized for parsing bitmaps.
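For completeness, here is a sketch of feeding the rgbaList produced above to decodeImageFromPixels, mirroring the questioner's own Completer pattern; it assumes dart:async is imported and dart:ui is imported as ui, and the helper name is mine:

// Sketch: wrap the engine's callback API in a Future.
Future<ui.Image> rgbaToImage(Uint8List rgbaList, int w, int h) {
  final completer = Completer<ui.Image>();
  ui.decodeImageFromPixels(rgbaList, w, h, ui.PixelFormat.rgba8888,
      (ui.Image img) => completer.complete(img));
  return completer.future;
}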
That function is too slow for me, though. So: how can a Flutter CameraImage be efficiently converted to a TensorImage in Dart?
  var img = imglib.Image(image.width, image.height); // Create Image buffer
  Plane plane = image.planes[0];
  const int shift = (0xFF << 24);

  // Fill image buffer with plane[0] from YUV420_888
  for (int x = 0; x < image.width; x++) {
    for (int planeOffset = 0;
        planeOffset < image.height * image.width;
        planeOffset += image.width) {
      final pixelColor = plane.bytes[planeOffset + x];
      // color: 0x FF FF FF FF
      //            A  B  G  R
      // Calculate pixel color
      var newVal =
          shift | (pixelColor << 16) | (pixelColor << 8) | pixelColor;
      img.data[planeOffset + x] = newVal;
    }
  }
  return img;
}
Your for loop seems inefficient: the data for a whole row (same planeOffset, different x) is cached at once, so it should be faster to swap the order of the two loops.
for (int y = 0; y < image.height; y++) {
  for (int x = 0; x < image.width; x++) {
    final pixelColor = plane.bytes[y * image.width + x];
    // ...
  }
}
However, your code does not seem to be reading from the actual camera stream; please refer to this thread for converting a CameraImage to an Image:
How to convert Camera Image to Image in Flutter?
I want to convert the camera image from the camera plugin's startImageStream function in Flutter into an Image so that I can crop it, but I have only found a way to convert it to a FirebaseVisionImage.
Edit For Color Image
If I understand you correctly, you are trying to convert the YUV420 format. The following code snippet is from https://github.com/flutter/flutter/issues/26348:
const shift = (0xFF << 24);

Future<Image> convertYUV420toImageColor(CameraImage image) async {
  try {
    final int width = image.width;
    final int height = image.height;
    final int uvRowStride = image.planes[1].bytesPerRow;
    final int uvPixelStride = image.planes[1].bytesPerPixel;
    print("uvRowStride: " + uvRowStride.toString());
    print("uvPixelStride: " + uvPixelStride.toString());

    // imgLib -> Image package from https://pub.dartlang.org/packages/image
    var img = imglib.Image(width, height); // Create Image buffer

    // Fill image buffer with plane[0] from YUV420_888
    for (int x = 0; x < width; x++) {
      for (int y = 0; y < height; y++) {
        final int uvIndex =
            uvPixelStride * (x / 2).floor() + uvRowStride * (y / 2).floor();
        final int index = y * width + x;

        final yp = image.planes[0].bytes[index];
        final up = image.planes[1].bytes[uvIndex];
        final vp = image.planes[2].bytes[uvIndex];
        // Calculate pixel color
        int r = (yp + vp * 1436 / 1024 - 179).round().clamp(0, 255);
        int g = (yp - up * 46549 / 131072 + 44 - vp * 93604 / 131072 + 91)
            .round()
            .clamp(0, 255);
        int b = (yp + up * 1814 / 1024 - 227).round().clamp(0, 255);
        // color: 0x FF FF FF FF
        //            A  B  G  R
        img.data[index] = shift | (b << 16) | (g << 8) | r;
      }
    }

    imglib.PngEncoder pngEncoder = new imglib.PngEncoder(level: 0, filter: 0);
    List<int> png = pngEncoder.encodeImage(img);
    muteYUVProcessing = false;
    return Image.memory(png);
  } catch (e) {
    print(">>>>>>>>>>>> ERROR:" + e.toString());
  }
  return null;
}
I found that sometimes planes[0].bytesPerRow is not the same as the width. In that case, you should do something like the code below.
static image_lib.Image convertYUV420ToImage(CameraImage cameraImage) {
  final width = cameraImage.width;
  final height = cameraImage.height;

  final yRowStride = cameraImage.planes[0].bytesPerRow;
  final uvRowStride = cameraImage.planes[1].bytesPerRow;
  final uvPixelStride = cameraImage.planes[1].bytesPerPixel!;

  final image = image_lib.Image(width, height);

  for (var w = 0; w < width; w++) {
    for (var h = 0; h < height; h++) {
      final uvIndex =
          uvPixelStride * (w / 2).floor() + uvRowStride * (h / 2).floor();
      final index = h * width + w;
      final yIndex = h * yRowStride + w;

      final y = cameraImage.planes[0].bytes[yIndex];
      final u = cameraImage.planes[1].bytes[uvIndex];
      final v = cameraImage.planes[2].bytes[uvIndex];

      image.data[index] = yuv2rgb(y, u, v);
    }
  }
  return image;
}
static int yuv2rgb(int y, int u, int v) {
  // Convert yuv pixel to rgb
  var r = (y + v * 1436 / 1024 - 179).round();
  var g = (y - u * 46549 / 131072 + 44 - v * 93604 / 131072 + 91).round();
  var b = (y + u * 1814 / 1024 - 227).round();

  // Clipping RGB values to be inside boundaries [ 0 , 255 ]
  r = r.clamp(0, 255);
  g = g.clamp(0, 255);
  b = b.clamp(0, 255);

  return 0xff000000 |
      ((b << 16) & 0xff0000) |
      ((g << 8) & 0xff00) |
      (r & 0xff);
}