How to change orientation of captured byte[] frames through onPreviewFrame callback? - android-camera

I have searched for this question a lot but have never found a satisfactory answer, so this is my last hope.
I have an onPreviewFrame callback set up, which delivers a byte[] of raw frames in the supported preview format (NV21; the frames are later encoded as H.264).
Now, the problem is that the callback always delivers the byte[] frames in a fixed orientation; when the device rotates, the rotation is not reflected in the captured byte[] frames. I have tried setDisplayOrientation and setRotation, but these APIs only affect the preview being displayed, not the captured byte[] frames.
The Android docs even say that Camera.setDisplayOrientation only affects the displayed preview, not the frame bytes:
This does not affect the order of byte array passed in onPreviewFrame(byte[], Camera), JPEG pictures, or recorded videos.
Finally: is there a way, at any API level, to change the orientation of the byte[] frames?

One possible way, if you don't care about the format, is to use the YuvImage class to get a JPEG buffer, then use this buffer to create a Bitmap and rotate it to the corresponding angle. Something like this:
@Override
public void onPreviewFrame(byte[] data, Camera camera) {
    Size previewSize = camera.getParameters().getPreviewSize();
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    byte[] rawImage = null;

    // Decode image from the retrieved buffer to JPEG
    YuvImage yuv = new YuvImage(data, ImageFormat.NV21, previewSize.width, previewSize.height, null);
    yuv.compressToJpeg(new Rect(0, 0, previewSize.width, previewSize.height), YOUR_JPEG_COMPRESSION, baos);
    rawImage = baos.toByteArray();

    // This is the same image as the preview, but in JPEG and not rotated
    Bitmap bitmap = BitmapFactory.decodeByteArray(rawImage, 0, rawImage.length);
    ByteArrayOutputStream rotatedStream = new ByteArrayOutputStream();

    // Rotate the Bitmap
    Matrix matrix = new Matrix();
    matrix.postRotate(YOUR_DEFAULT_ROTATION);

    // We rotate the same Bitmap
    bitmap = Bitmap.createBitmap(bitmap, 0, 0, previewSize.width, previewSize.height, matrix, false);

    // We dump the rotated Bitmap to the stream
    bitmap.compress(CompressFormat.JPEG, YOUR_JPEG_COMPRESSION, rotatedStream);
    rawImage = rotatedStream.toByteArray();

    // Do something with this byte array
}
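If you need to keep the frames in NV21 (for example, to feed them straight to an encoder), the JPEG round-trip above is wasteful. As a minimal alternative sketch, written for this answer rather than taken from the original post, here is a plain NV21 rotation; it rotates by 90° clockwise, and whether 90° is the right amount depends on the camera sensor orientation and the current display rotation:

// Rotates an NV21 frame 90 degrees clockwise into a new buffer.
// NV21 layout: width*height Y bytes, followed by interleaved V/U bytes
// at half resolution in each dimension.
public static byte[] rotateNV21By90(byte[] input, int width, int height) {
    byte[] output = new byte[input.length];
    int frameSize = width * height;

    // Rotate the Y plane; the destination image is height x width.
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            output[x * height + (height - 1 - y)] = input[y * width + x];
        }
    }

    // Rotate the interleaved VU plane (one VU pair per 2x2 block of Y).
    for (int y = 0; y < height / 2; y++) {
        for (int x = 0; x < width / 2; x++) {
            int srcIndex = frameSize + y * width + x * 2;
            int dstIndex = frameSize + x * height + (height / 2 - 1 - y) * 2;
            output[dstIndex] = input[srcIndex];         // V
            output[dstIndex + 1] = input[srcIndex + 1]; // U
        }
    }
    return output;
}

You would call this on the data array inside onPreviewFrame and hand the returned buffer (with width and height swapped) to whatever consumes the frames.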

I have modified the onPreviewFrame method of this open-source Android Touch-To-Record library to transpose and resize a captured frame.
I defined "yuvIplImage" as follows in my setCameraParams() method:
IplImage yuvIplImage = IplImage.create(mPreviewSize.height, mPreviewSize.width, opencv_core.IPL_DEPTH_8U, 2);
This is my onPreviewFrame() method:
@Override
public void onPreviewFrame(byte[] data, Camera camera)
{
    long frameTimeStamp = 0L;
    if (FragmentCamera.mAudioTimestamp == 0L && FragmentCamera.firstTime > 0L)
    {
        frameTimeStamp = 1000L * (System.currentTimeMillis() - FragmentCamera.firstTime);
    }
    else if (FragmentCamera.mLastAudioTimestamp == FragmentCamera.mAudioTimestamp)
    {
        frameTimeStamp = FragmentCamera.mAudioTimestamp + FragmentCamera.frameTime;
    }
    else
    {
        long l2 = (System.nanoTime() - FragmentCamera.mAudioTimeRecorded) / 1000L;
        frameTimeStamp = l2 + FragmentCamera.mAudioTimestamp;
        FragmentCamera.mLastAudioTimestamp = FragmentCamera.mAudioTimestamp;
    }
    synchronized (FragmentCamera.mVideoRecordLock)
    {
        if (FragmentCamera.recording && FragmentCamera.rec && lastSavedframe != null && lastSavedframe.getFrameBytesData() != null && yuvIplImage != null)
        {
            FragmentCamera.mVideoTimestamp += FragmentCamera.frameTime;
            if (lastSavedframe.getTimeStamp() > FragmentCamera.mVideoTimestamp)
            {
                FragmentCamera.mVideoTimestamp = lastSavedframe.getTimeStamp();
            }
            try
            {
                yuvIplImage.getByteBuffer().put(lastSavedframe.getFrameBytesData());

                // In my case, mPreviewSize.width = 1280 and mPreviewSize.height = 720
                IplImage bgrImage = IplImage.create(mPreviewSize.width, mPreviewSize.height, opencv_core.IPL_DEPTH_8U, 4);
                IplImage transposed = IplImage.create(mPreviewSize.height, mPreviewSize.width, yuvIplImage.depth(), 4);
                IplImage squared = IplImage.create(mPreviewSize.height, mPreviewSize.height, yuvIplImage.depth(), 4);

                int[] _temp = new int[mPreviewSize.width * mPreviewSize.height];
                Util.YUV_NV21_TO_BGR(_temp, data, mPreviewSize.width, mPreviewSize.height);
                bgrImage.getIntBuffer().put(_temp);

                opencv_core.cvTranspose(bgrImage, transposed);
                opencv_core.cvFlip(transposed, transposed, 1);
                opencv_core.cvSetImageROI(transposed, opencv_core.cvRect(0, 0, mPreviewSize.height, mPreviewSize.height));
                opencv_core.cvCopy(transposed, squared, null);
                opencv_core.cvResetImageROI(transposed);

                videoRecorder.setTimestamp(lastSavedframe.getTimeStamp());
                videoRecorder.record(squared);
            }
            catch (com.googlecode.javacv.FrameRecorder.Exception e)
            {
                e.printStackTrace();
            }
        }
        lastSavedframe = new SavedFrames(data, frameTimeStamp);
    }
}
This code uses a method, "YUV_NV21_TO_BGR", which I found at this link.
Basically, this method is used to resolve what I call "The Green Devil problem on Android". You can see other Android devs facing the same problem in other SO threads. Before adding the "YUV_NV21_TO_BGR" method, when I just took the transpose of the YuvIplImage (and, more importantly, a combination of transpose and flip, with or without resizing), there was a greenish output in the resulting video. This "YUV_NV21_TO_BGR" method saved the day. Thanks to @David Han from the above Google Groups thread.
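For reference, the sketch below shows roughly what such an NV21-to-packed-pixel conversion looks like. It is not the exact code from that link; the coefficients and channel order (BGR vs. ARGB) vary between implementations, so treat it as an illustration of the idea rather than a drop-in replacement for Util.YUV_NV21_TO_BGR:

// Illustrative NV21 -> packed 32-bit pixel conversion (integer approximation).
// out must have width * height elements; data is the NV21 preview buffer.
public static void yuvNv21ToPacked(int[] out, byte[] data, int width, int height) {
    int frameSize = width * height;
    for (int y = 0; y < height; y++) {
        int uvRow = frameSize + (y >> 1) * width;
        for (int x = 0; x < width; x++) {
            int yVal = data[y * width + x] & 0xFF;
            int v = (data[uvRow + (x & ~1)] & 0xFF) - 128;
            int u = (data[uvRow + (x & ~1) + 1] & 0xFF) - 128;

            // Fixed-point YUV -> RGB (BT.601-style coefficients).
            int r = yVal + ((351 * v) >> 8);
            int g = yVal - ((179 * v + 86 * u) >> 8);
            int b = yVal + ((443 * u) >> 8);

            r = Math.max(0, Math.min(255, r));
            g = Math.max(0, Math.min(255, g));
            b = Math.max(0, Math.min(255, b));

            // Pack as 0xAARRGGBB; swap channels if your consumer expects BGRA.
            out[y * width + x] = 0xFF000000 | (r << 16) | (g << 8) | b;
        }
    }
}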
Also, be aware that all this processing (transpose, flip and resize) inside onPreviewFrame takes a lot of time, which causes a serious hit to your frames-per-second (FPS) rate. When I used this code inside the onPreviewFrame method, the FPS of the recorded video dropped from 30 fps to about 3 fps.
I would advise against this approach. Instead, you can do the processing (transpose, flip and resize) on the recorded video file afterwards, using JavaCV in an AsyncTask; a rough outline of that is below. Hope this helps.
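The outline below is only a sketch of that idea, written for this edit rather than taken from the library. It uses class names from the old com.googlecode.javacv API that the rest of this answer uses (FFmpegFrameGrabber, FFmpegFrameRecorder); the paths, codec settings and rotation amount are placeholders, and it only copies the video stream, so audio would need separate handling:

// Rough sketch: rotate a finished recording off the UI/camera thread.
// Paths and the 90-degree transpose+flip combination are placeholders.
private static class RotateVideoTask extends AsyncTask<Void, Void, Boolean> {
    private final String inputPath;
    private final String outputPath;

    RotateVideoTask(String inputPath, String outputPath) {
        this.inputPath = inputPath;
        this.outputPath = outputPath;
    }

    @Override
    protected Boolean doInBackground(Void... params) {
        try {
            FFmpegFrameGrabber grabber = new FFmpegFrameGrabber(inputPath);
            grabber.start();

            // The output is transposed, so width and height are swapped.
            FFmpegFrameRecorder recorder = new FFmpegFrameRecorder(
                    outputPath, grabber.getImageHeight(), grabber.getImageWidth());
            recorder.setFormat("mp4");
            recorder.setFrameRate(grabber.getFrameRate());
            recorder.start();

            IplImage rotated = IplImage.create(grabber.getImageHeight(), grabber.getImageWidth(),
                    opencv_core.IPL_DEPTH_8U, 3);
            IplImage frame;
            while ((frame = grabber.grab()) != null) {
                // Same transpose + flip combination as in onPreviewFrame,
                // just applied once per decoded frame after recording.
                opencv_core.cvTranspose(frame, rotated);
                opencv_core.cvFlip(rotated, rotated, 1);
                recorder.record(rotated);
            }

            recorder.stop();
            recorder.release();
            grabber.stop();
            grabber.release();
            return true;
        } catch (Exception e) {
            e.printStackTrace();
            return false;
        }
    }
}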

Related

Why is the file image size of the texture different from the size when loaded in Unity?

When I load a 5.01 KB texture file with Texture2D.LoadImage(), I expected the allocation to be about 8 KB, but the memory allocation is 2.8 MB.
Why does the texture allocate so much more memory?
Sample Code
private Texture2D[] m_SaveTextures = null;

public void Start()
{
    string path = "../Directory/";
    m_SaveTextures = new Texture2D[]
    {
        GetTexture(path + "Red.png"),
        GetTexture(path + "Green.png"),
        GetTexture(path + "Blue.png"),
        GetTexture(path + "Black.png")
    };
}

private Texture2D GetTexture(string path)
{
    if (!File.Exists(path)) return null;

    var texture = new Texture2D(1, 1, TextureFormat.ARGB32, false)
    {
        name = Path.GetFileNameWithoutExtension(path)
    };
    texture.LoadImage(File.ReadAllBytes(path));
    return texture;
}
[Example image and Unity Profiler screenshot omitted]
Because PNG is a compressed image file, but the texture you are loading it into is not. TextureFormat.ARGB32 is an uncompressed format, so the memory cost is 32 bits (4 bytes) per pixel, regardless of how small the file is on disk.
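As a rough sanity check (the question doesn't give the image dimensions, so assume something around 850 × 850 pixels): 850 × 850 pixels × 4 bytes per pixel ≈ 2.9 MB, which is in line with the ~2.8 MB the profiler shows. The 5.01 KB figure only measures how well the PNG compresses on disk, not how much memory the decoded pixels occupy.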
You can lower the memory footprint by using one of the compressed formats, like DXT5, if your target platform supports it.

Append the next frames to a video when using MediaEncoder in Unity?

I'm trying to make a video out of a set of RenderTextures.
I've written the following based on the Unity documentation, but I want to append further RenderTextures after the video has been created.
Making the encoder and audio buffer member variables leads to an error where the .mp4 file cannot be created, or the Editor crashes.
Is there any way to keep the current .mp4 file handle open so other RenderTextures can be appended after this function ends?
void EncodeVideoFromPredistortedImages(RenderTexture[] predistortedImages)
{
    // Compose the video again to encode from the Images list.
    Texture2D convertedToTex2d = new Texture2D(predistortedImages[0].width, predistortedImages[0].height);
    videoAttr.width = (uint)convertedToTex2d.width;
    videoAttr.height = (uint)convertedToTex2d.height;

    using (var encoder = new MediaEncoder(encodedVideoFilePath, videoAttr/*, audioAttr*/))
    using (var audioBuf = new Unity.Collections.NativeArray<float>(sampleFramesPerVideoFrame, Unity.Collections.Allocator.Temp))
    {
        for (int i = 0; i < predistortedImages.Length; ++i)
        {
            Debug.Log($"Current encoding idx {i} of {ExtractedTexturesArr.Length}");

            RenderTexture prevRT = RenderTexture.active;
            RenderTexture.active = predistortedImages[i];
            convertedToTex2d.ReadPixels(new Rect(0, 0, predistortedImages[i].width, predistortedImages[i].height), 0, 0);
            convertedToTex2d.Apply();
            RenderTexture.active = prevRT;

            encoder.AddFrame(convertedToTex2d);
            encoder.AddSamples(audioBuf);
        }

        encoder.Dispose();
        DestroyImmediate(convertedToTex2d);
    }
}
I use OpenCV.Videos to deal with this problem, instead of a third-party Unity VideoWriter, due to performance.

AndroidX Camera Core ImageAnalysis.Analyser results in distorted image

I am using the ImageAnalysis library to extract live preview frames to run barcode scanning and OCR on.
I'm not having any issues with barcode scanning at all, but OCR is giving weak results. I'm sure this could be for a few reasons. My current attempt at a solution is to send the frames to GCP Storage before I run OCR (or barcode scanning) on them, so I can look at them in bulk. All of them look very similar.
My best guess is that the way I'm processing the frames could be causing the pixels to end up in the buffer in the wrong order (I'm inexperienced with Android, sorry). Meaning, rather than being laid out 0,0 then 0,1 and so on, pixels are being placed in seemingly random positions. I can't figure out where this is happening, though. Once I can look at the image quality, I'll be able to analyze what the issue is with OCR, but this is my current blocker, unfortunately.
Extra note: I am uploading the image to GCP Storage before even running OCR, so for the sake of looking at this we can ignore the OCR statement I made; I just wanted to give some background.
Below is the code where I initialize the camera and analyzer and then observe the frames:
private void startCamera() {
    // make sure there isn't another camera instance running before starting
    CameraX.unbindAll();

    /* start preview */
    int aspRatioW = txView.getWidth();  // get width of screen
    int aspRatioH = txView.getHeight(); // get height
    Rational asp = new Rational(aspRatioW, aspRatioH); // aspect ratio
    Size screen = new Size(aspRatioW, aspRatioH);      // size of the screen

    // config obj for preview/viewfinder thingy.
    PreviewConfig pConfig = new PreviewConfig.Builder().setTargetResolution(screen).build();
    Preview preview = new Preview(pConfig); // lets build it

    preview.setOnPreviewOutputUpdateListener(
            new Preview.OnPreviewOutputUpdateListener() {
                // to update the surface texture we have to destroy it first, then re-add it
                @Override
                public void onUpdated(Preview.PreviewOutput output) {
                    ViewGroup parent = (ViewGroup) txView.getParent();
                    parent.removeView(txView);
                    parent.addView(txView, 0);
                    txView.setSurfaceTexture(output.getSurfaceTexture());
                    updateTransform();
                }
            });

    /* image capture */
    // config obj, selected capture mode
    ImageCaptureConfig imgCapConfig = new ImageCaptureConfig.Builder()
            .setCaptureMode(ImageCapture.CaptureMode.MAX_QUALITY)
            .setTargetRotation(getWindowManager().getDefaultDisplay().getRotation())
            .build();
    final ImageCapture imgCap = new ImageCapture(imgCapConfig);

    findViewById(R.id.imgCapture).setOnClickListener(new View.OnClickListener() {
        @Override
        public void onClick(View v) {
            Log.d("image taken", "image taken");
        }
    });

    /* image analyser */
    ImageAnalysisConfig imgAConfig = new ImageAnalysisConfig.Builder()
            .setImageReaderMode(ImageAnalysis.ImageReaderMode.ACQUIRE_LATEST_IMAGE)
            .build();
    ImageAnalysis analysis = new ImageAnalysis(imgAConfig);

    analysis.setAnalyzer(
            Executors.newSingleThreadExecutor(), new ImageAnalysis.Analyzer() {
                @Override
                public void analyze(ImageProxy imageProxy, int degrees) {
                    Log.d("analyze", "just analyzing");
                    if (imageProxy == null || imageProxy.getImage() == null) {
                        return;
                    }
                    Image mediaImage = imageProxy.getImage();
                    int rotation = degreesToFirebaseRotation(degrees);
                    FirebaseVisionImage image = FirebaseVisionImage.fromBitmap(toBitmap(mediaImage));
                    if (!isMachineLearning) {
                        Log.d("analyze", "isMachineLearning is about to be true");
                        isMachineLearning = true;
                        String haha = MediaStore.Images.Media.insertImage(getContentResolver(),
                                toBitmap(mediaImage), "image", "theImageDescription");
                        Log.d("uploadingimage: ", haha);
                        extractBarcode(image, toBitmap(mediaImage));
                    }
                }
            });

    // bind to lifecycle:
    CameraX.bindToLifecycle(this, analysis, imgCap, preview);
}
Below is how I structure my detection (pretty straightforward and simple):
FirebaseVisionBarcodeDetectorOptions options = new FirebaseVisionBarcodeDetectorOptions.Builder()
.setBarcodeFormats(FirebaseVisionBarcode.FORMAT_ALL_FORMATS)
.build();
FirebaseVisionBarcodeDetector detector = FirebaseVision.getInstance().getVisionBarcodeDetector(options);
detector.detectInImage(firebaseVisionImage)
Finally, when I'm uploading the image to GCP - Storage, this is what it looks like:
ByteArrayOutputStream baos = new ByteArrayOutputStream();
bmp.compress(Bitmap.CompressFormat.JPEG, 100, baos); //bmp being the image that I ran barcode scanning on - as well as OCR
byte[] data = baos.toByteArray();
UploadTask uploadTask = storageRef.putBytes(data);
Thank you all for your kind help (:
My problem was that I was trying to convert to a bitmap AFTER barcode scanning. The conversion wasn't properly written, but I found a way around it without having to write my own bitmap conversion function (though I plan on going back to it, as I can see myself needing it, and genuine curiosity makes me want to figure it out).
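For anyone hitting the same distortion: the usual culprit is a toBitmap-style helper that copies the YUV_420_888 planes without respecting their layout. The sketch below is a common pattern for such a conversion, written here for illustration rather than taken from this question; it assumes the simple case where the plane buffers are tightly packed, and a robust version should copy row by row using Plane.getRowStride() and getPixelStride():

// Sketch: convert an android.media.Image in YUV_420_888 to a Bitmap via NV21 + YuvImage.
// Assumes the plane buffers are tightly packed (no row padding); real devices may pad rows,
// in which case the planes must be copied row by row using the strides.
private static Bitmap yuv420ToBitmap(Image image) {
    ByteBuffer yBuffer = image.getPlanes()[0].getBuffer();
    ByteBuffer uBuffer = image.getPlanes()[1].getBuffer();
    ByteBuffer vBuffer = image.getPlanes()[2].getBuffer();

    int ySize = yBuffer.remaining();
    int uSize = uBuffer.remaining();
    int vSize = vBuffer.remaining();

    byte[] nv21 = new byte[ySize + uSize + vSize];
    yBuffer.get(nv21, 0, ySize);
    // NV21 expects interleaved V then U after the Y plane; copying the V plane
    // first, then U, reproduces that order in the common packed case.
    vBuffer.get(nv21, ySize, vSize);
    uBuffer.get(nv21, ySize + vSize, uSize);

    YuvImage yuvImage = new YuvImage(nv21, ImageFormat.NV21,
            image.getWidth(), image.getHeight(), null);
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    yuvImage.compressToJpeg(new Rect(0, 0, image.getWidth(), image.getHeight()), 90, out);
    byte[] jpegBytes = out.toByteArray();
    return BitmapFactory.decodeByteArray(jpegBytes, 0, jpegBytes.length);
}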

Unity texture from disk has low resolution

I am trying to load a texture from disk (and eventually create a sprite from it), but the sprite renders as a low-resolution image.
What I am doing:
-> Download the image from the URL. Once the image is downloaded, I save the texture to disk as a PNG so that next time it doesn't require a download.
WWW www = new WWW(url);
yield return www;
if (www.isDone)
{
    if (string.IsNullOrEmpty(www.error))
    {
        Sprite img = Sprite.Create(www.texture, new Rect(0, 0, www.texture.width, www.texture.height), new Vector2(0, 0));
        reward.RewardSprite = img;

        byte[] bytes = www.texture.EncodeToPNG();
        FileManager.SaveRewardImage(reward.rewardId, bytes);
    }
    else
    {
        Debug.Log(www.error);
    }
}
-> Load from disk
string path = string.Format("Cache\\Venue\\{0}", nameWithoutExtension);
return Resources.Load<Texture2D>(path);
The first time, when the texture loads from the URL, its resolution seems fine (because it's the original one). When it loads from the cache, it drops to a lower one.
Can someone tell me what I am missing, or whether there is a way around it?
Thanks in advance.
You can construct the texture yourself when creating the sprite, in the same way you already construct the rectangle, and specify the TextureFormat and mipmap setting you need, then copy the downloaded pixels into it:
// Create the texture with the format and mipmap setting you need,
// then fill it with the downloaded pixels before creating the sprite.
Texture2D tex = new Texture2D(www.texture.width, www.texture.height, TextureFormat.RGBA32, false);
tex.SetPixels(www.texture.GetPixels());
tex.Apply();
Sprite img = Sprite.Create(tex, new Rect(0, 0, tex.width, tex.height), new Vector2(0, 0));

Scale a PNG in Unity5? - Bounty

Surprisingly, in Unity, for years the only way to simply scale an actual PNG has been to use the very awesome library http://wiki.unity3d.com/index.php/TextureScale
Example below
How do you scale a PNG using Unity5 functions? There must be a way now with new UI and so on.
So, scaling actual pixels (such as in Color[]) or literally a PNG file, perhaps downloaded from the net.
(BTW if you're new to Unity, the Resize call is unrelated. It merely changes the size of an array.)
public WebCamTexture wct;

public void UseFamousLibraryToScale()
{
    // take the photo. scale down to 256
    // also crop to a central-square
    int oldW = wct.width; // NOTE example code assumes wider than high
    int oldH = wct.height;
    Texture2D photo = new Texture2D(oldW, oldH, TextureFormat.ARGB32, false);

    // consider WaitForEndOfFrame() before GetPixels
    photo.SetPixels(0, 0, oldW, oldH, wct.GetPixels());
    photo.Apply();

    int newH = 256;
    int newW = Mathf.FloorToInt(((float)newH / (float)oldH) * oldW);

    // use a famous Unity library to scale
    TextureScale.Bilinear(photo, newW, newH);

    // crop to central square 256.256
    int startAcross = (newW - 256) / 2;
    Color[] pix = photo.GetPixels(startAcross, 0, 256, 256);

    photo = new Texture2D(256, 256, TextureFormat.ARGB32, false);
    photo.SetPixels(pix);
    photo.Apply();

    demoImage.texture = photo;

    // consider WriteAllBytes(
    //     Application.persistentDataPath + "p.png",
    //     photo.EncodeToPNG()); etc
}
Just BTW, it occurs to me I'm probably only talking about scaling down here (as you often have to do to post an image, create something on the fly, or whatever). I guess there would not often be a need to scale an image up in size; it's pointless quality-wise.
If you're okay with stretch-scaling, there's actually a simpler way, using a temporary RenderTexture and Graphics.Blit. If you need the result to be a Texture2D, temporarily swapping RenderTexture.active and reading its pixels into a Texture2D should do the trick. For example:
public Texture2D ScaleTexture(Texture src, int width, int height)
{
    RenderTexture rt = RenderTexture.GetTemporary(width, height);
    Graphics.Blit(src, rt);

    RenderTexture currentActiveRT = RenderTexture.active;
    RenderTexture.active = rt;

    Texture2D tex = new Texture2D(rt.width, rt.height);
    tex.ReadPixels(new Rect(0, 0, tex.width, tex.height), 0, 0);
    tex.Apply();

    RenderTexture.ReleaseTemporary(rt);
    RenderTexture.active = currentActiveRT;
    return tex;
}
