What color format should I use to encode an I420 frame converted by libyuv in MediaCodec? - android-camera

I converted an NV21 frame to an I420 frame with libyuv, then encoded the frame into a video file with MediaCodec, but the video looks strange. I tried the COLOR_FormatYUV420SemiPlanar and COLOR_FormatYUV420Flexible color formats.
So, what color format should I use to encode an I420 frame converted by libyuv?
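For context: COLOR_FormatYUV420SemiPlanar expects NV12-style semi-planar data (one Y plane followed by one interleaved UV plane), while libyuv's I420 output is fully planar (Y, then U, then V), so feeding I420 straight to a semi-planar encoder scrambles the chroma. A minimal sketch of the repacking in pure Python (hypothetical buffers; in practice libyuv's I420ToNV12 does this conversion for you):

```python
def i420_to_nv12(i420: bytes, width: int, height: int) -> bytes:
    """Repack a planar I420 buffer (Y, U, V planes) into semi-planar
    NV12 (Y plane followed by one interleaved UV plane), the layout
    that COLOR_FormatYUV420SemiPlanar encoders typically expect."""
    y_size = width * height
    c_size = (width // 2) * (height // 2)   # each chroma plane is quarter size
    y = i420[:y_size]
    u = i420[y_size:y_size + c_size]
    v = i420[y_size + c_size:y_size + 2 * c_size]
    uv = bytearray(2 * c_size)
    uv[0::2] = u   # NV12 interleaves chroma as U0 V0 U1 V1 ...
    uv[1::2] = v
    return y + bytes(uv)
```

A more robust approach is to query the codec's supported color formats at runtime (MediaCodecInfo.CodecCapabilities) and convert with libyuv to whichever layout the device actually reports, since some devices want NV12 and others want planar I420.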

Related

Flutter: Efficient Image filtering

So I am using Dart's image package to manipulate a JPG image which was loaded and decoded.
Filtering the image works perfectly fine; however, displaying the image seems to be somewhat slow.
The idea was to use Flutter's Image widget in combination with a MemoryImage a la:
Image.memory(bytes)
bytes must be a binary representation of the image, e.g. PNG or JPG, and is not compatible with the image library's internal uint32 storage format or a simple bitmap. As a result, one is required to encode the bitmap back to a JPG, which is pretty slow.
Is there a more efficient way of doing this?
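One workaround often suggested for this situation (an assumption on my part, not from the question): wrap the raw pixels in an uncompressed container such as BMP, which Flutter's decoder can typically handle and which skips the expensive JPG compression pass entirely. A rough Python sketch of the idea (the Dart version would be analogous; rgba_to_bmp is a hypothetical helper):

```python
import struct

def rgba_to_bmp(pixels: bytes, width: int, height: int) -> bytes:
    """Wrap raw 32-bit RGBA pixel data in a minimal BMP container.
    BMP needs no compression pass, so this is far cheaper than
    re-encoding to JPG just to satisfy an image widget."""
    row_size = width * 4                      # 32bpp rows need no padding
    image_size = row_size * height
    file_size = 122 + image_size              # 14-byte header + 108-byte V4 DIB
    header = struct.pack('<2sIHHI', b'BM', file_size, 0, 0, 122)
    # BITMAPV4HEADER with BI_BITFIELDS so the RGBA channel order is explicit;
    # a negative height means top-down row order.
    dib = struct.pack('<IiiHHIIiiII4I52x',
                      108, width, -height, 1, 32, 3, image_size,
                      2835, 2835, 0, 0,
                      0x000000FF, 0x0000FF00, 0x00FF0000, 0xFF000000)
    return header + dib + pixels
```

This is a sketch under the assumption that the target decoder accepts BI_BITFIELDS BMPs; if it does not, a simple 24-bit BMP (dropping alpha) is the safer fallback.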

With ffmpeg's example extract_mvs.c, I am getting black-and-white images from a color RTSP camera

I am using the extract_mvs.c from ffmpeg:
https://ffmpeg.org/doxygen/2.5/extract__mvs_8c_source.html
I added OpenCV to write the image out with imwrite.
cv::Mat img(frame->height,frame->width,CV_8UC1,frame->data[0]);
imwrite( "pic.jpg", img );
That works because the image in the frame is in grayscale. The camera is a color camera, however, and I don't know why I am getting grayscale. If I change the above to CV_8UC3, I get a segmentation fault.
I tried to save the image with ppm_save function and I still get a black and white frame when there should be a color frame. Any ideas?
Thanks,
Chris
I just read about graphics file formats and such. JPG requires the BGR24 format. The raw frame buffer format, YUV420P, needs to be converted to BGR24 using swscale. Then the output frame's height and width need to be set manually before calling:
cv::Mat img(out_frame->height,out_frame->width,CV_8UC3,out_frame->data[0]);
imwrite( "pic.jpg", img );
Likewise, the PPM file format requires RGB24, and the raw format needs to be converted to this before saving the .ppm file.
Thanks,
Chris
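For reference, the per-sample conversion that swscale performs in that pipeline can be sketched as follows (a toy Python sketch using one common full-range BT.601 formula; real code should keep using sws_scale, and the exact coefficients depend on the stream's colorimetry):

```python
def yuv_to_bgr(y: int, u: int, v: int) -> tuple:
    """Convert one full-range BT.601 YUV sample to a BGR24 pixel
    (the channel order OpenCV's CV_8UC3 Mat and imwrite expect)."""
    d, e = u - 128, v - 128          # chroma is centered on 128
    clamp = lambda x: max(0, min(255, int(round(x))))
    r = clamp(y + 1.402 * e)
    g = clamp(y - 0.344136 * d - 0.714136 * e)
    b = clamp(y + 1.772 * d)
    return (b, g, r)                 # note: BGR order, not RGB
```

A neutral gray sample (Y=128, U=V=128) maps to (128, 128, 128), which is a quick sanity check that the chroma offsets are handled correctly.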

How to split a frame into two images (even field and odd field) from interlaced NV12-format raw data

I have raw progressive NV12 YUV data and need to split each frame into images with even and odd fields (interlaced data).
If you want to do all the jobs manually:
Extract each frame from .yuv file
Depending on the format and resolution of your stream, calculate the size of one frame. Then you can do the extraction.
Split .yuv frame into .yuv field
Calculate the size of each line, and split the frame by odd/even lines. Please take care of the UV lines if the format is yuv420.
Convert .yuv field to .bmp image
If the format is not yuv444, then convert it to yuv444 first. Then do the YUV-to-RGB conversion, and store the image in .bmp format.
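The manual field-splitting step above can be sketched like this (pure Python, assuming an NV12 layout with even width and height; split_nv12_fields is a hypothetical helper name):

```python
def split_nv12_fields(frame: bytes, width: int, height: int):
    """Split one interlaced NV12 frame into a top field (even lines)
    and a bottom field (odd lines).  NV12 = Y plane (width*height)
    followed by one interleaved UV plane at half vertical resolution,
    so UV row k belongs with Y rows 2k and 2k+1."""
    y_size = width * height
    y_rows = [frame[i * width:(i + 1) * width] for i in range(height)]
    uv = frame[y_size:]
    uv_rows = [uv[i * width:(i + 1) * width] for i in range(height // 2)]
    # Even Y rows pair with even UV rows; odd Y rows with odd UV rows.
    top = b''.join(y_rows[0::2]) + b''.join(uv_rows[0::2])
    bot = b''.join(y_rows[1::2]) + b''.join(uv_rows[1::2])
    return top, bot
```

Each returned field is itself a valid NV12 buffer of size width x (height/2), which is what makes the per-field .bmp conversion in the next step straightforward.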
With the help of ffmpeg and ImageMagick, it can also be done (more easily) in two steps (supposing that the resolution of a frame is 1920x1080 and of a field is 1920x540):
Convert YUV to Images
ffmpeg -s 1920x1080 -i input.yuv frame_%3d.bmp
-pix_fmt can be used to specify the format (pixel layout) of the .yuv file.
Split Images to Odd/Even
convert frame_000.bmp -define sample:offset=25 -sample 100%x50% frame_000_top.bmp
convert frame_000.bmp -define sample:offset=75 -sample 100%x50% frame_000_bot.bmp
These two commands can be found in the last part of de-interlace a video frame.

Convert to UIImage PNG format interlaced

Is it possible to convert an NSData *dataImage, which was in JPEG or PNG format, into an interlaced PNG? I know about image compression with UIImagePNGRepresentation, but I think it converts only to non-interlaced PNG. So, how should I set an option on a UIImage or NSData to get an interlaced PNG?
UIImagePNGRepresentation makes only non-interlaced PNGs.
Nice question, but I think it is impossible using UIKit.
I think you should use libpng to create an interlaced PNG.
Look at this article; there you can find a Minimal Example of writing a PNG File.
When you set the PNG header in this call:
png_set_IHDR(png_ptr, info_ptr, width, height,
bit_depth, color_type, PNG_INTERLACE_NONE,
PNG_COMPRESSION_TYPE_BASE, PNG_FILTER_TYPE_BASE);
try changing PNG_INTERLACE_NONE to PNG_INTERLACE_ADAM7 (that is libpng's interlace-type constant for Adam7; PNG_INTERLACE_ADAM7_PASSES is just the pass count, not a valid interlace type).

The color of video is wrong when made from UIImage of the PNG files

I am taking a UIImage from a PNG file and feeding it to the videoWriter:
avAdaptor appendPixelBuffer:pixelBuffer
When the resulting video comes out, it seems to be lacking one color, missing the yellow color or something.
I took a look at the function that makes the pixel buffer out of the UIImage:
CVPixelBufferCreateWithBytes(NULL,
myWidth,
myHeight,
kCVPixelFormatType_32BGRA,
(void*)CFDataGetBytePtr(image),
CGImageGetBytesPerRow(cgImage),
NULL,
0,
NULL,
&pixelBuffer);
I also tried kCVPixelFormatType_32ARGB and others; it didn't help.
any thoughts?
Please verify whether your PNG image has a transparency (alpha) element. If your PNG image doesn't contain transparency, then it's 24 bits per pixel, not 32.
Also, have you tried kCVPixelFormatType_32RGBA ?
Maybe the image sizes do not fit together.
Your input image should have the same width and height as the video output. If myWidth or myHeight differs from the size of the image (i.e. a different aspect ratio), bytes may be lost at the end of each line, which could lead to color shifting. kCVPixelFormatType_32BGRA seems to be the preferred (fastest) pixel format, so that should be okay.
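To illustrate the mechanism described in that answer (a toy Python model, not CoreVideo code): if the reader of a pixel buffer assumes a bytes-per-row that differs from the real one, every subsequent row starts mid-pixel and the channels appear shifted or swapped:

```python
def read_rows(buf: bytes, assumed_stride: int, width: int, height: int):
    """Read `height` rows of `width` 4-byte BGRA pixels using an
    assumed row stride.  If the assumed stride differs from the
    buffer's real stride, later rows start at the wrong offset and
    the channel values drift out of alignment."""
    rows = []
    for r in range(height):
        row = buf[r * assumed_stride: r * assumed_stride + width * 4]
        rows.append([tuple(row[i:i + 4]) for i in range(0, len(row), 4)])
    return rows
```

With a buffer whose real stride is 10 bytes (two 4-byte pixels plus 2 bytes of padding per row), reading with the correct stride recovers the pixels intact, while reading with an assumed stride of 8 pulls padding bytes into the second row and shifts every channel; this is why passing CGImageGetBytesPerRow(cgImage) rather than a computed width*4 matters.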
There is no yellow color in the RGB colorspace. This means yellow is only the red and green components. It seems that blue is missing.
I assume you are using a CFDataRef (maybe NSData) for the image. If it is an NSData object, you can print the bytes to the debug console using
NSLog(@"data: %@", image);
This will print a hex dump to the console. There you can see whether you have alpha and what kind of byte order your PNG uses. If your image has alpha and is fully opaque, every fourth byte should be the same number.