I'm trying to make a video out of a set of RenderTextures.
I wrote the code below based on the Unity documentation, but I want to append further RenderTextures to the video after it has been created.
Making the encoder and the audio buffer member variables leads to an error where the .mp4 file cannot be created, or the Editor crashes.
Is there any way to keep the current .mp4 file handle open so that I can append other RenderTextures after this function ends?
void EncodeVideoFromPredistortedImages(RenderTexture[] predistortedImages)
{
    // Compose the video again to encode from the images list.
    Texture2D convertedToTex2d = new Texture2D(predistortedImages[0].width, predistortedImages[0].height);

    videoAttr.width = (uint)convertedToTex2d.width;
    videoAttr.height = (uint)convertedToTex2d.height;

    using (var encoder = new MediaEncoder(encodedVideoFilePath, videoAttr/*, audioAttr*/))
    using (var audioBuf = new Unity.Collections.NativeArray<float>(sampleFramesPerVideoFrame, Unity.Collections.Allocator.Temp))
    {
        for (int i = 0; i < predistortedImages.Length; ++i)
        {
            Debug.Log($"Current encoding idx {i} of {predistortedImages.Length}");

            // Copy the RenderTexture into the readable Texture2D.
            RenderTexture prevRT = RenderTexture.active;
            RenderTexture.active = predistortedImages[i];
            convertedToTex2d.ReadPixels(new Rect(0, 0, predistortedImages[i].width, predistortedImages[i].height), 0, 0);
            convertedToTex2d.Apply();
            RenderTexture.active = prevRT;

            encoder.AddFrame(convertedToTex2d);
            encoder.AddSamples(audioBuf);
        }

        // Note: the using statement already disposes the encoder when the block ends.
        encoder.Dispose();
        DestroyImmediate(convertedToTex2d);
    }
}
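For clarity, the "member variable" arrangement the question refers to would look roughly like the sketch below: the encoder is created once, frames are appended in batches, and Dispose is only called when every batch has been added (which is also when the .mp4 is finalized). Class and method names here are illustrative, and the asker reports that this arrangement failed to create the .mp4 or crashed the Editor in their project.

using UnityEditor.Media;
using UnityEngine;

public class BatchVideoEncoder
{
    MediaEncoder _encoder;   // keeps the .mp4 file handle open between calls

    public void Begin(string path, int width, int height)
    {
        var videoAttr = new VideoTrackAttributes
        {
            width = (uint)width,
            height = (uint)height,
            frameRate = new MediaRational(30),
            includeAlpha = false
        };
        _encoder = new MediaEncoder(path, videoAttr);
    }

    public void AppendFrames(Texture2D[] frames)
    {
        // Each call keeps appending to the same file because the encoder is never recreated.
        foreach (var frame in frames)
            _encoder.AddFrame(frame);
    }

    public void Finish()
    {
        // The file is only finalized here, once all batches have been appended.
        _encoder?.Dispose();
        _encoder = null;
    }
}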
In the end I used OpenCV's VideoWriter to deal with this problem instead of a Unity third-party VideoWriter, due to performance.
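For anyone curious what that could look like, here is a rough, hypothetical sketch using the OpenCV for Unity wrapper. The namespaces, class names and whether this matches the asker's actual plugin are assumptions, and the helper class is purely illustrative.

using UnityEngine;
using OpenCVForUnity.CoreModule;
using OpenCVForUnity.ImgprocModule;
using OpenCVForUnity.VideoioModule;
using OpenCVForUnity.UnityUtils;

public class OpenCvVideoAppender
{
    VideoWriter _writer;
    Mat _rgba;
    Mat _bgr;

    public void Open(string path, int width, int height, double fps)
    {
        // 'mp4v' is a commonly supported fourcc for .mp4 containers; availability depends on the platform build.
        _writer = new VideoWriter(path, VideoWriter.fourcc('m', 'p', '4', 'v'), fps, new Size(width, height));
        _rgba = new Mat(height, width, CvType.CV_8UC4);
        _bgr = new Mat(height, width, CvType.CV_8UC3);
    }

    public void Append(Texture2D frame)
    {
        Utils.texture2DToMat(frame, _rgba);                     // copy the readable texture into an RGBA Mat
        Imgproc.cvtColor(_rgba, _bgr, Imgproc.COLOR_RGBA2BGR);  // VideoWriter expects BGR frames
        _writer.write(_bgr);                                    // frames can keep being appended until release()
    }

    public void Close()
    {
        _writer.release();                                      // finalizes the video file
    }
}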
I have been trying to change the format from a camera that gives a texture in Alpha8 to RGBA, and have been unsuccessful so far.
This is the code I've tried:
public static class TextureHelperClass
{
    public static Texture2D ChangeFormat(this Texture2D oldTexture, TextureFormat newFormat)
    {
        // Create new empty Texture
        Texture2D newTex = new Texture2D(2, 2, newFormat, false);
        // Copy old texture pixels into new one
        newTex.SetPixels(oldTexture.GetPixels());
        // Apply
        newTex.Apply();
        return newTex;
    }
}
And I'm calling the code like this:
Texture imgTexture = Alpha8Texture.ChangeFormat(TextureFormat.RGBA32);
But the image gets corrupted and isn't visible.
Does anyone know how to change this Alpha8 to RGBA so I can process it like any other image in OpenCV?
A friend provided me with the answer:
Color[] cs = oldTexture.GetPixels();
for (int i = 0; i < cs.Length; i++)
{
    // we want to set the r, g and b values to a
    cs[i].r = cs[i].a;
    cs[i].g = cs[i].a;
    cs[i].b = cs[i].a;
    cs[i].a = 1.0f;
}
// set the pixels in the new texture
newTex.SetPixels(cs);
// Apply
newTex.Apply();
This will take a lot of resources, but it will work for sure.
If you know a better way to make this change, please add an answer to this thread.
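Combining the two snippets above, a complete version could look like the sketch below. It assumes the source texture is readable and that RGBA32 is the only target format you need; the method name Alpha8ToRGBA32 is illustrative. Note that it also sizes the new texture to match the source, instead of the fixed 2x2 texture used in the question.

using UnityEngine;

public static class TextureHelperClass
{
    public static Texture2D Alpha8ToRGBA32(this Texture2D oldTexture)
    {
        // Match the source dimensions instead of a fixed 2x2 texture.
        Texture2D newTex = new Texture2D(oldTexture.width, oldTexture.height, TextureFormat.RGBA32, false);

        Color[] cs = oldTexture.GetPixels();
        for (int i = 0; i < cs.Length; i++)
        {
            // Copy the alpha channel into r, g and b so the image shows up as grayscale.
            cs[i].r = cs[i].a;
            cs[i].g = cs[i].a;
            cs[i].b = cs[i].a;
            cs[i].a = 1.0f;
        }

        newTex.SetPixels(cs);
        newTex.Apply();
        return newTex;
    }
}

If the pixels never need to be read back on the CPU, a Graphics.Blit into an RGBA32 RenderTexture with a small channel-swizzling shader should be much faster than this per-pixel loop.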
In Unity I have integrated Agora.io such that, from within my virtual reality app, I can connect a video call to an outside user on a webpage. The VR user can see the website user, but the website user cannot see the VR user because there is no available physical camera to use. Is there a way to use a scene camera for the Agora video feed? This would mean that the website user would be able to see into the VR user's world.
Yes. Although I haven't done VR projects before, the concept should carry over. You can use the External Video Source to send any video frame as if it were sent from the physical camera. For scene cameras, you can use a RenderTexture to output the camera feed and extract the raw data from that RenderTexture. So the steps are:
1. Set up your camera to output to a RenderTexture (plus logic to display this RenderTexture somewhere locally, if needed).
2. When you set up the Agora RTC engine, make sure you enable the external video source with this call:
   mRtcEngine.SetExternalVideoSource(true, false);
3. At each frame, extract the raw image data from the RenderTexture.
4. Send the raw frame data to the SDK function rtc.PushVideoFrame().
You can find the code for the last step in this gist (a condensed sketch of steps 3 and 4 also follows after the link):
https://gist.github.com/icywind/92053d0983e713515c64d5c532ebee21
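For reference, here is a condensed sketch of steps 3 and 4. It assumes rt (the RenderTexture), tex (a readable RGBA32 Texture2D of the same size) and frameCount are fields set up elsewhere; the pixel-format value follows the gist, so adjust it if your colours come out swapped.

// Condensed sketch: read the RenderTexture back, then push it as an external video frame.
IEnumerator PushRenderTexture()
{
    yield return new WaitForEndOfFrame();

    RenderTexture.active = rt;
    tex.ReadPixels(new Rect(0, 0, rt.width, rt.height), 0, 0);
    tex.Apply();
    RenderTexture.active = null;

    IRtcEngine rtc = IRtcEngine.QueryEngine();
    if (rtc != null)
    {
        var frame = new ExternalVideoFrame();
        frame.type = ExternalVideoFrame.VIDEO_BUFFER_TYPE.VIDEO_BUFFER_RAW_DATA;
        frame.format = ExternalVideoFrame.VIDEO_PIXEL_FORMAT.VIDEO_PIXEL_BGRA; // value taken from the gist
        frame.buffer = tex.GetRawTextureData();
        frame.stride = rt.width;
        frame.height = rt.height;
        frame.rotation = 0;
        frame.timestamp = frameCount++;
        rtc.PushVideoFrame(frame);
    }
}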
I modified the screen-share code from Agora.io to extract a RenderTexture instead. The problem is that I only get a white or black screen on the receiver, while my RenderTexture is a depth-camera video feed.
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using agora_gaming_rtc;
using UnityEngine.UI;
using System.Globalization;
using System.Runtime.InteropServices;
using System;
public class ShareScreen : MonoBehaviour
{
    Texture2D mTexture;
    Rect mRect;
    [SerializeField]
    private string appId = "Your_AppID";
    [SerializeField]
    private string channelName = "agora";
    public IRtcEngine mRtcEngine;
    int i = 100;
    public RenderTexture depthMap;

    void Start()
    {
        Debug.Log("ScreenShare Activated");
        mRtcEngine = IRtcEngine.getEngine(appId);
        mRtcEngine.SetLogFilter(LOG_FILTER.DEBUG | LOG_FILTER.INFO | LOG_FILTER.WARNING | LOG_FILTER.ERROR | LOG_FILTER.CRITICAL);
        mRtcEngine.SetParameters("{\"rtc.log_filter\": 65535}");
        mRtcEngine.SetExternalVideoSource(true, false);
        mRtcEngine.EnableVideo();
        mRtcEngine.EnableVideoObserver();
        mRtcEngine.JoinChannel(channelName, null, 0);
        mRect = new Rect(0, 0, depthMap.width, depthMap.height);
        mTexture = new Texture2D((int)mRect.width, (int)mRect.height, TextureFormat.RGBA32, false);
    }

    void Update()
    {
        // Start the screen-share coroutine
        StartCoroutine(shareScreen());
    }

    // Screen share
    IEnumerator shareScreen()
    {
        yield return new WaitForEndOfFrame();
        // Make the render texture the active one so it can be read back
        RenderTexture.active = depthMap;
        // Read the pixels inside the rectangle
        mTexture.ReadPixels(mRect, 0, 0);
        // Apply the pixels read from the rectangle to the texture
        mTexture.Apply();
        // Get the raw texture data from the texture into an array of bytes
        byte[] bytes = mTexture.GetRawTextureData();
        // Make enough space for the bytes array
        int size = Marshal.SizeOf(bytes[0]) * bytes.Length;
        // Check to see if there is an engine instance already created
        IRtcEngine rtc = IRtcEngine.QueryEngine();
        // If the engine is present
        if (rtc != null)
        {
            // Create a new external video frame
            ExternalVideoFrame externalVideoFrame = new ExternalVideoFrame();
            // Set the buffer type of the video frame
            externalVideoFrame.type = ExternalVideoFrame.VIDEO_BUFFER_TYPE.VIDEO_BUFFER_RAW_DATA;
            // Set the video pixel format
            externalVideoFrame.format = ExternalVideoFrame.VIDEO_PIXEL_FORMAT.VIDEO_PIXEL_BGRA;
            // Apply the raw data pulled from the rectangle created earlier to the video frame
            externalVideoFrame.buffer = bytes;
            // Set the width of the video frame (in pixels)
            externalVideoFrame.stride = (int)mRect.width;
            // Set the height of the video frame
            externalVideoFrame.height = (int)mRect.height;
            // Remove pixels from the sides of the frame
            externalVideoFrame.cropLeft = 0;
            externalVideoFrame.cropTop = 0;
            externalVideoFrame.cropRight = 0;
            externalVideoFrame.cropBottom = 0;
            // Rotate the video frame (0, 90, 180, or 270)
            externalVideoFrame.rotation = 180;
            // Increment i with the video timestamp
            externalVideoFrame.timestamp = i++;
            // Push the external video frame we just created
            int a = rtc.PushVideoFrame(externalVideoFrame);
            Debug.Log(" pushVideoFrame = " + a);
        }
    }
}
I'm working on an algorithm based on hand-gesture recognition. I found this algorithm and ran it in a WinForms app with C# scripts. I need to use the same technique in my game to perform hand-gesture detection through the webcam. I tried to use the algorithm in my game scripts but was unable to capture any image with it. Below is the code that I'm currently working on. I'm using the AForge.NET framework to implement the motion detection. The Bitmap image always returns null, whereas the same algorithm in WinForms captures an image every time the frame changes. I know there is a technique of using PhotoCapture in Unity, but I'm not sure how to use it at runtime on every frame. Any guidance is appreciated. Thanks!
OpenCamera.cs
using AForge.GestureRecognition;
using System.Collections;
using System.Collections.Generic;
using System.Drawing;
using UnityEngine;
using System.Drawing.Design;
using UnityEngine.VR.WSA.WebCam;
using System.Linq;
using System;
public class OpenCamera : MonoBehaviour
{
    // statistics length
    private const int statLength = 15;

    Bitmap image;
    PhotoCapture photoCaptureObject = null;
    Texture2D targetTexture = null;

    // current statistics index
    private int statIndex = 0;
    // ready statistics values
    private int statReady = 0;
    // statistics array
    private int[] statCount = new int[statLength];

    private GesturesRecognizerFromVideo gesturesRecognizer = new GesturesRecognizerFromVideo();
    private Gesture gesture = new Gesture();
    private int gestureShowTime = 0;

    // Use this for initialization
    void Start()
    {
        WebCamTexture webcamTexture = new WebCamTexture();
        Renderer renderer = GetComponent<Renderer>();
        renderer.material.mainTexture = webcamTexture;
        webcamTexture.Play();
    }

    // Update is called once per frame
    void Update()
    {
        gesturesRecognizer.ProcessFrame(ref image);

        // check if we need to draw gesture information on top of image
        if (gestureShowTime > 0)
        {
            if ((gesture.LeftHand == HandPosition.NotRaised) || (gesture.RightHand != HandPosition.NotRaised))
            {
                System.Drawing.Graphics g = System.Drawing.Graphics.FromImage(image);
                string text = string.Format("Left = " + gesture.LeftHand + "\nRight = " + gesture.RightHand);
                System.Drawing.Font drawFont = new System.Drawing.Font("Courier", 13, System.Drawing.FontStyle.Bold);
                SolidBrush drawBrush = new SolidBrush(System.Drawing.Color.Blue);
                g.DrawString(text, drawFont, drawBrush, new PointF(0, 5));
                drawFont.Dispose();
                drawBrush.Dispose();
                g.Dispose();
            }
            gestureShowTime--;
        }
    }
}
As mentioned in the comments (which I am not able to reply to at the moment), those libraries always depend heavily on the System.Drawing libraries, which are essentially Windows-only.
What you could try is copying the System.Drawing DLL (from your system folder, or a download) into your Unity project (it's managed code, so it will run on every exportable platform) to keep the references. Then you could capture a Texture2D every frame (from a RenderTexture or similar) and draw those pixels into a Bitmap, which is fed to the library.
Note that copying thousands of pixels every frame is really, really heavy, and you would have to convert UnityEngine.Color to System.Drawing.Color...
This is the easiest solution that might work, but definitely not a good final one.
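As a rough illustration of the "draw the pixels into a Bitmap" step, here is a sketch. It assumes System.Drawing.dll is available in the project as described above, and that webcamTexture is the WebCamTexture created in the question's Start method.

using UnityEngine;
using System.Drawing;
using System.Drawing.Imaging;

public static class FrameToBitmap
{
    public static Bitmap FromWebCamTexture(WebCamTexture webcamTexture)
    {
        Color32[] pixels = webcamTexture.GetPixels32();
        int w = webcamTexture.width;
        int h = webcamTexture.height;

        var bmp = new Bitmap(w, h, PixelFormat.Format32bppArgb);
        for (int y = 0; y < h; y++)
        {
            for (int x = 0; x < w; x++)
            {
                // GetPixels32 rows start at the bottom-left, Bitmap rows at the top-left, so flip y.
                Color32 c = pixels[(h - 1 - y) * w + x];
                // SetPixel is very slow; this is only to illustrate the conversion.
                bmp.SetPixel(x, y, System.Drawing.Color.FromArgb(c.a, c.r, c.g, c.b));
            }
        }
        return bmp;
    }
}

In the code as posted, the image field is never assigned, which is why it is always null; a Bitmap produced this way could be assigned to it before calling gesturesRecognizer.ProcessFrame.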
I am very new to Unity3d.
I have a JSON array that contains the parameters of the prefabs I want to create at runtime.
I want to display images that are stored on my server in the scene.
I have a prefab "iAsset" that has a plane (mesh filter) and I want to load the image files as the texture of the plane.
I am able to Instantiate the prefabs; however, the prefab shows up as a white square. This is my code:
for (var i = 0; i < bookData.Assets.Count; i++)
{
    GameObject newAsset = null;
    newAsset = (GameObject)Instantiate(iasset, new Vector3(2 * i, 0, 0), Quaternion.identity);

    if (!imageAssetRequested)
    {
        remoteImageAsset = new WWW(((BookAssets)bookData.Assets[i]).AssetContent);
        imageAssetRequested = true;
    }

    if (remoteImageAsset.progress >= 1)
    {
        if (remoteImageAsset.error == null)
        {
            loadingRemoteAsset = false;
            newAsset.renderer.material.mainTexture = remoteImageAsset.texture;
        }
    }
}
The URLs of the images on my server are retrieved from the JSON array:
((BookAssets)bookData.Assets[i]).AssetContent
The code builds without any errors, I would very much appreciate any help to display the remote images.
You are not waiting for your downloads to complete.
The WWW class is asynchronous and will commence the download; however, you need to either poll it later (using the code you do have above), or yield on the WWW in a coroutine, which will block execution (within that coroutine) until the download finishes (either successfully or due to failure).
Refer to the Unity documentation for WWW.
Note, however, that the sample code on that page is wrong, and Start is not a coroutine / IEnumerator. Your code would look something like this:
void Start()
{
    // ... your code above ...
    StartCoroutine(DownloadImage(((BookAssets)bookData.Assets[i]).AssetContent, newAsset.renderer.material));
}

IEnumerator DownloadImage(string url, Material mat)
{
    WWW www = new WWW(url);
    yield return www;
    if (www.error == null)
        mat.mainTexture = www.texture;
}
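On newer Unity versions, where WWW is deprecated, the equivalent coroutine could use UnityWebRequestTexture. A sketch under that assumption (the class and method names here are illustrative):

using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

public class RemoteImageLoader : MonoBehaviour
{
    public IEnumerator LoadImageInto(string url, Material mat)
    {
        using (UnityWebRequest request = UnityWebRequestTexture.GetTexture(url))
        {
            yield return request.SendWebRequest();

            if (string.IsNullOrEmpty(request.error))
            {
                // Assign the downloaded texture to the prefab's material.
                mat.mainTexture = DownloadHandlerTexture.GetContent(request);
            }
            else
            {
                Debug.LogError("Image download failed: " + request.error);
            }
        }
    }
}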
I have searched for this question a lot but have never seen any satisfactory answers, so now this is my last hope.
I have an onPreviewFrame callback set up, which gives a byte[] of raw frames in the supported preview format (NV21 with H.264 encoded type).
Now, the problem is that the callback always starts giving byte[] frames in a fixed orientation; whenever the device rotates, the change isn't reflected in the captured byte[] frames. I have tried setDisplayOrientation and setRotation, but these APIs only affect the preview being displayed, not the captured byte[] frames.
The Android docs even say that Camera.setDisplayOrientation only affects the displayed preview, not the frame bytes:
This does not affect the order of byte array passed in onPreviewFrame(byte[], Camera), JPEG pictures, or recorded videos.
Finally, is there a way, at any API level, to change the orientation of the byte[] frames?
One possible way, if you don't care about the format, is to use the YuvImage class to get a JPEG buffer, then use this buffer to create a Bitmap and rotate it to the corresponding angle. Something like this:
@Override
public void onPreviewFrame(byte[] data, Camera camera) {
    Size previewSize = camera.getParameters().getPreviewSize();
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    byte[] rawImage = null;

    // Decode image from the retrieved buffer to JPEG
    YuvImage yuv = new YuvImage(data, ImageFormat.NV21, previewSize.width, previewSize.height, null);
    yuv.compressToJpeg(new Rect(0, 0, previewSize.width, previewSize.height), YOUR_JPEG_COMPRESSION, baos);
    rawImage = baos.toByteArray();

    // This is the same image as the preview, but in JPEG and not rotated
    Bitmap bitmap = BitmapFactory.decodeByteArray(rawImage, 0, rawImage.length);
    ByteArrayOutputStream rotatedStream = new ByteArrayOutputStream();

    // Rotate the Bitmap
    Matrix matrix = new Matrix();
    matrix.postRotate(YOUR_DEFAULT_ROTATION);

    // We rotate the same Bitmap
    bitmap = Bitmap.createBitmap(bitmap, 0, 0, previewSize.width, previewSize.height, matrix, false);

    // We dump the rotated Bitmap to the stream
    bitmap.compress(CompressFormat.JPEG, YOUR_JPEG_COMPRESSION, rotatedStream);
    rawImage = rotatedStream.toByteArray();

    // Do something with this byte array
}
I have modified the onPreviewFrame method of this open-source Android Touch-To-Record library to transpose and resize a captured frame.
I defined "yuvIplImage" as follows in my setCameraParams() method:
IplImage yuvIplImage = IplImage.create(mPreviewSize.height, mPreviewSize.width, opencv_core.IPL_DEPTH_8U, 2);
This is my onPreviewFrame() method:
@Override
public void onPreviewFrame(byte[] data, Camera camera)
{
    long frameTimeStamp = 0L;

    if (FragmentCamera.mAudioTimestamp == 0L && FragmentCamera.firstTime > 0L)
    {
        frameTimeStamp = 1000L * (System.currentTimeMillis() - FragmentCamera.firstTime);
    }
    else if (FragmentCamera.mLastAudioTimestamp == FragmentCamera.mAudioTimestamp)
    {
        frameTimeStamp = FragmentCamera.mAudioTimestamp + FragmentCamera.frameTime;
    }
    else
    {
        long l2 = (System.nanoTime() - FragmentCamera.mAudioTimeRecorded) / 1000L;
        frameTimeStamp = l2 + FragmentCamera.mAudioTimestamp;
        FragmentCamera.mLastAudioTimestamp = FragmentCamera.mAudioTimestamp;
    }

    synchronized (FragmentCamera.mVideoRecordLock)
    {
        if (FragmentCamera.recording && FragmentCamera.rec && lastSavedframe != null && lastSavedframe.getFrameBytesData() != null && yuvIplImage != null)
        {
            FragmentCamera.mVideoTimestamp += FragmentCamera.frameTime;

            if (lastSavedframe.getTimeStamp() > FragmentCamera.mVideoTimestamp)
            {
                FragmentCamera.mVideoTimestamp = lastSavedframe.getTimeStamp();
            }

            try
            {
                yuvIplImage.getByteBuffer().put(lastSavedframe.getFrameBytesData());

                IplImage bgrImage = IplImage.create(mPreviewSize.width, mPreviewSize.height, opencv_core.IPL_DEPTH_8U, 4); // In my case, mPreviewSize.width = 1280 and mPreviewSize.height = 720
                IplImage transposed = IplImage.create(mPreviewSize.height, mPreviewSize.width, yuvIplImage.depth(), 4);
                IplImage squared = IplImage.create(mPreviewSize.height, mPreviewSize.height, yuvIplImage.depth(), 4);

                int[] _temp = new int[mPreviewSize.width * mPreviewSize.height];
                Util.YUV_NV21_TO_BGR(_temp, data, mPreviewSize.width, mPreviewSize.height);
                bgrImage.getIntBuffer().put(_temp);

                opencv_core.cvTranspose(bgrImage, transposed);
                opencv_core.cvFlip(transposed, transposed, 1);
                opencv_core.cvSetImageROI(transposed, opencv_core.cvRect(0, 0, mPreviewSize.height, mPreviewSize.height));
                opencv_core.cvCopy(transposed, squared, null);
                opencv_core.cvResetImageROI(transposed);

                videoRecorder.setTimestamp(lastSavedframe.getTimeStamp());
                videoRecorder.record(squared);
            }
            catch (com.googlecode.javacv.FrameRecorder.Exception e)
            {
                e.printStackTrace();
            }
        }

        lastSavedframe = new SavedFrames(data, frameTimeStamp);
    }
}
This code uses a method, YUV_NV21_TO_BGR, which I found at this link.
Basically this method is used to resolve what I call "the green devil problem on Android". You can see other Android devs facing the same problem in other SO threads. Before adding the YUV_NV21_TO_BGR method, when I just took the transpose of the YuvIplImage, or more importantly a combination of transpose and flip (with or without resizing), there was greenish output in the resulting video. This YUV_NV21_TO_BGR method saved the day. Thanks to @David Han from that Google Groups thread.
Also, you should know that all this processing (transpose, flip and resize) in onPreviewFrame takes a lot of time, which causes a very serious hit to your frames-per-second (FPS) rate. When I used this code inside the onPreviewFrame method, the FPS of the recorded video dropped from 30 fps to 3 fps.
I would advise against this approach. Instead, you can do post-recording processing (transpose, flip and resize) of your video file using JavaCV in an AsyncTask. Hope this helps.