How can I improve the image quality while sharing the screen using the Agora SDK with Unity? I've used the settings below for the VideoProfile:
mRtcEngine.SetVideoEncoderConfiguration(new VideoEncoderConfiguration()
{
// Sets the video encoding bitrate (Kbps).
minBitrate = 100,
bitrate = 1130,
// Sets the video frame rate.
minFrameRate = 10,
frameRate = FRAME_RATE.FRAME_RATE_FPS_24,
// Sets the video resolution.
dimensions = new VideoDimensions() { width = EncodeWidth, height = EncodeHeight },
// Sets the video encoding degradation preference under limited bandwidth. MAINTAIN_QUALITY means to degrade the frame rate to maintain the video quality.
degradationPreference = DEGRADATION_PREFERENCE.MAINTAIN_QUALITY,
// Note: if the remote user's video surface is set to flip horizontally, then we should flip the frame before sending.
mirrorMode = VIDEO_MIRROR_MODE_TYPE.VIDEO_MIRROR_MODE_ENABLED,
// Sets the orientation mode of the video.
orientationMode = ORIENTATION_MODE.ORIENTATION_MODE_FIXED_PORTRAIT
});
The output from the Editor to a device looks like below:
And the output from a device to the Editor or to another device looks blurry, as below:
I've tested with Wi-Fi on both devices and ensured the connection quality is good, and I've also forced the settings to prefer image quality over frame rate:
// false = prefer image quality over frame rate
mRtcEngine.SetVideoQualityParameters(false);
// disable dual-stream mode so only the high-quality stream is sent
mRtcEngine.EnableDualStreamMode(false);
// make receivers default to the high-quality stream
mRtcEngine.SetRemoteDefaultVideoStreamType(REMOTE_VIDEO_STREAM_TYPE.REMOTE_VIDEO_STREAM_HIGH);
Did I miss anything else that would improve the image quality?
Also, how can I share only a Rect portion of the screen, with the rect draggable by the user to any part of the screen?
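For the draggable-Rect part, here is one possible approach as a minimal, untested sketch: let the user drag a Rect in screen coordinates, copy that region with Texture2D.ReadPixels each frame, and feed the raw bytes to the engine through the external video source path (SetExternalVideoSource/PushVideoFrame, discussed in the related question further down). The class name and the drag handling are illustrative:

using UnityEngine;

public class DraggableCaptureRect : MonoBehaviour
{
    // Region of the screen to share, in screen pixels (bottom-left origin).
    public Rect captureRect = new Rect(100, 100, 640, 360);
    private Vector2 dragOffset;
    private bool dragging;

    void Update()
    {
        Vector2 mouse = Input.mousePosition;
        if (Input.GetMouseButtonDown(0) && captureRect.Contains(mouse))
        {
            dragging = true;
            dragOffset = mouse - captureRect.position;
        }
        if (Input.GetMouseButtonUp(0))
            dragging = false;
        if (dragging)
            captureRect.position = mouse - dragOffset;
    }

    // Must be called after rendering finishes (e.g. from a coroutine that
    // yields new WaitForEndOfFrame()), because ReadPixels reads the back buffer.
    public Texture2D CaptureRegion()
    {
        var tex = new Texture2D((int)captureRect.width, (int)captureRect.height,
                                TextureFormat.RGBA32, false);
        tex.ReadPixels(captureRect, 0, 0);
        tex.Apply();
        return tex; // tex.GetRawTextureData() yields bytes for PushVideoFrame
    }
}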
Blurry video may be caused by low bitrates and resolution ratios. Check the following:
Check the videoProfile. If possible, set the videoProfile to a higher level and see whether the video is clearer (a sketch follows this list).
Check the stream type of the receiver. If the stream type is low, call the setRemoteVideoStreamType method to switch from the low stream to the high stream. (You did this.)
Switch to another Wi-Fi network to ensure that the blurry video is not caused by a poor Internet connection.
Turn off all pre-processing options.
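For the first item, here is a sketch of what "a higher level" could look like with the same SetVideoEncoderConfiguration call from the question. The 1280x720 at 30 fps profile and the ~1710 Kbps bitrate are illustrative starting points (roughly what Agora's bitrate table suggests for 720p/30), not verified settings for your scene:

mRtcEngine.SetVideoEncoderConfiguration(new VideoEncoderConfiguration()
{
    // Higher resolution and frame rate than the original 24 fps config.
    dimensions = new VideoDimensions() { width = 1280, height = 720 },
    frameRate = FRAME_RATE.FRAME_RATE_FPS_30,
    bitrate = 1710,
    // Keep quality when bandwidth drops, at the cost of frame rate.
    degradationPreference = DEGRADATION_PREFERENCE.MAINTAIN_QUALITY
});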
If this issue persists, contact Agora customer support (via the ticket system) with the following information:
The uid of the user who sees the blurry video.
The time frame during which the blurry video appears.
SDK logs and screen recording files of the user.
You can check the statistics of every call in Agora Analytics in Dashboard.
Related
I tried to use SetExternalVideoSource and PushVideoFrame to send custom video frames to the RTC engine with the methods described here. However, the video quality is not as good as with the default video streaming options, even though I am pushing video frames at the same resolution. Has anyone noticed this difference before? I wonder if this is expected? Or maybe there is a way to set the custom video quality that I overlooked?
Hope the Slack channel answer helped you with this. For others reading this, here is what the discussion was:
"You should give a resolution configuration. Here is the config I use in my advanced demo app:"
mRtcEngine.SetVideoEncoderConfiguration(new VideoEncoderConfiguration()
{
bitrate = 1130,
frameRate = FRAME_RATE.FRAME_RATE_FPS_15,
dimensions = new VideoDimensions() { width = Screen.width, height = Screen.height },
// Note: if the remote user's video surface is set to flip horizontally, then we should flip the frame before sending.
mirrorMode = VIDEO_MIRROR_MODE_TYPE.VIDEO_MIRROR_MODE_ENABLED
});
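To expand on that: when pushing custom frames, the encoder configuration should match the frames you actually push, and the external source must be enabled before joining the channel. Below is a hedged sketch; the field and enum names follow the agora_gaming_rtc package as I recall them, and the channel name, pixel format, and helper names are illustrative:

using agora_gaming_rtc;

void StartCustomCapture(IRtcEngine engine, int width, int height)
{
    // Enable the external video source before joining the channel.
    engine.SetExternalVideoSource(true, false);
    // Match the encoder configuration to the frames we intend to push;
    // a size mismatch forces the encoder to rescale and can soften the image.
    engine.SetVideoEncoderConfiguration(new VideoEncoderConfiguration()
    {
        bitrate = 1130,
        frameRate = FRAME_RATE.FRAME_RATE_FPS_15,
        dimensions = new VideoDimensions() { width = width, height = height }
    });
    engine.JoinChannel("demo-channel", "", 0); // illustrative channel name
}

void PushFrame(IRtcEngine engine, byte[] pixelBytes, int width, int height, long timestampMs)
{
    var frame = new ExternalVideoFrame()
    {
        type = ExternalVideoFrame.VIDEO_BUFFER_TYPE.VIDEO_BUFFER_RAW_DATA,
        // Use the pixel format that matches your byte buffer.
        format = ExternalVideoFrame.VIDEO_PIXEL_FORMAT.VIDEO_PIXEL_BGRA,
        buffer = pixelBytes,
        stride = width,
        height = height,
        timestamp = timestampMs
    };
    engine.PushVideoFrame(frame);
}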
I'm recording video using MediaRecorder. When using the back camera it works fine, but when using the front camera the captured video is flipped/mirrored, meaning that an item on the right appears on the left. The camera preview works fine; only the final captured video is flipped.
Here is what the camera preview looks like:
But the final video appears like this (everything on the left-hand side appears on the right-hand side):
What I tried so far:
I tried to apply a matrix when preparing the recorder, but it doesn't seem to change anything.
private boolean prepareRecorder(int cameraId){
    // Create a new instance of MediaRecorder
    mRecorder = new MediaRecorder();
    setCameraDisplayOrientation(this, cameraId, mCamera);
    int angle = getVideoOrientationAngle(this, cameraId);
    mRecorder.setOrientationHint(angle);
    if (cameraId == Camera.CameraInfo.CAMERA_FACING_FRONT) {
        // This Matrix is created but never applied to anything, which is
        // why it has no effect. Note also that preScale(1.0f, -1.0f) would
        // flip vertically; a horizontal mirror would be preScale(-1.0f, 1.0f).
        Matrix matrix = new Matrix();
        matrix.preScale(1.0f, -1.0f);
    }
    // all other code to prepare the recorder here
}
I've already read all the questions below, but none of them solved my problem. For information, I'm using SurfaceView for the camera preview, so this question here doesn't help.
1) Android flip front camera mirror flipped video
2) How to keep android from inverting the image from the front facing camera?
3) Prevent flipping of the front facing camera
So my questions are:
1) How can I capture a video with the front camera that is not mirrored (exactly the same as the camera preview)?
2) How can I achieve this when the camera preview uses SurfaceView rather than TextureView? (All the questions I mentioned above talk about TextureView.)
All possible solutions are most welcome. Thanks.
EDIT
I made two short video clips to clarify the problem; please download and take a look:
1) The video of the camera preview during recording
2) The final recorded video
So, if the system camera app produces video similar to your app's, you didn't do anything wrong. Now it's time to understand what happens with front-facing camera video recording.
The front-facing camera is no different from the rear-facing camera in the way it captures still pictures or video. The difference is in how the phone displays the camera preview on the screen. To make it look more natural to the user, Android (and all other systems) mirrors the preview, so that you see yourself as if in a mirror.
It is important to understand that this only applies to the way the preview is presented to you. If you pick up any video conferencing app, connect two devices that you hold in two hands, and look at yourself, you will see to your surprise that the two instances of yourself are flipped.
This is not a bug, this is the natural way to present the video to the other party.
See the sketch:
This is how you see the scene:
This is how your peer sees the same scene
Normally, recording of a video is done from the point of view of your peer, as in the second picture. This is the natural setup for, e.g., video conferencing.
But Snapchat and some other social apps choose to store the front-facing video clip as if you recorded it from the mirror (as if the recorder is in your hand in the first picture). Some people like this feature, others hate it (see https://forums.androidcentral.com/general-help-how/664539-front-camera-pics-mirrored-reversed-only-snapchat.html and https://www.reddit.com/r/nexus6/comments/3846ay/has_anyone_found_a_fix_for_snapchat_flipping).
You cannot use MediaRecorder for that. You can use the lower-level MediaCodec API to record processed frames. You need to flip each frame 'manually', and this may be a significant performance hit, because normally MediaRecorder 'connects' the camera to the hardware encoder in a very efficient way, without even needing to copy the pixels to user memory. This answer shows how you can manipulate the way the camera is rendered to a texture.
You can achieve this by recording video manually from the surface view.
In that case, the preview and the recording will match exactly.
I've been using this library for this purpose:
https://github.com/spaceLenny/recordablesurfaceview
Here is a guide on how to use it (not with the camera, but with OpenGL drawing): https://withintent.uncorkedstudios.com/recording-screen-video-on-android-with-recordablesurfaceview-451c9daa213e
I am using cocos2d's CCRenderTexture to record video of my game. But recording video at Retina display resolution costs a lot of CPU and memory, so I want to use a low resolution for the video recording while keeping Retina resolution for normal gameplay. Is this possible?
I've tried [[CCDirector sharedDirector] enableRetinaDisplay:NO]; while recording, but it doesn't seem to work; the generated output is totally wrong.
This is not feasible.
You'd have to render each frame twice, once on the screen, then onto the render texture. A serious drop in framerate is inevitable even if you lower the resolution of the render texture somehow.
The reason is simply that you'll also have to write each render texture as an image to flash memory, which is extremely slow. You'll also end up with a huge amount of data: if each (PNG/JPG) image file ends up being a reasonably small 50 KB, then one second of recorded data at 60 fps will consume 3 megabytes of flash memory, and one minute around 180 megabytes.
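Spelled out, the arithmetic is:

\[
50\,\mathrm{KB/frame} \times 60\,\mathrm{frames/s} = 3\,\mathrm{MB/s},
\qquad
3\,\mathrm{MB/s} \times 60\,\mathrm{s} = 180\,\mathrm{MB/min}.
\]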
To record a demo of your game, most games follow the simple principle of recording the user input and then playing it back as if the user had issued those commands. This requires careful planning, no breaking changes when updating the app (or else invalidating old demos), and no use of non-deterministic randomizers (i.e., ones seeded with the time).
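As an illustration of the record-and-replay idea (a minimal sketch with hypothetical InputEvent/DemoRecorder names, shown in C# but applicable to any engine):

using System.Collections.Generic;

// One recorded command: what happened, and on which simulation frame.
struct InputEvent
{
    public int Frame;
    public string Command; // e.g. "jump", "move_left"
}

class DemoRecorder
{
    // A fixed seed so playback sees the same random numbers as the recording.
    public int RandomSeed = 12345;
    private readonly List<InputEvent> events = new List<InputEvent>();

    public void Record(int frame, string command)
    {
        events.Add(new InputEvent { Frame = frame, Command = command });
    }

    // During playback, re-issue each command on its original frame. With a
    // fixed seed and a deterministic simulation, the game state evolves
    // exactly as it did while recording, so you can re-render it at any
    // resolution or speed.
    public IReadOnlyList<InputEvent> Events { get { return events; } }
}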
If you need to record a demo for making a trailer video, there are plenty of screen-grabbing solutions around. Some even specialize in grabbing iPhone video, either from the device (usually requiring a source code/library component) or from the Simulator.
You should check out the Kamcord SDK for recording gameplay. See http://kamcord.com/
Kamcord has built-in gameplay video and audio recording technology for iOS. It allows you, the game developer, to capture gameplay videos with an API. Your users can then replay and share these gameplay videos via YouTube, Facebook, Twitter, and email.
I'm trying to do some image processing on iPhone.
I'm using http://developer.apple.com/library/ios/#qa/qa2010/qa1702.html to capture the camera frames.
I saw that I can set the AVCaptureVideoDataOutput image format using setVideoSettings, but is it possible to get the images at a lower resolution?
If not, is there an efficient way to downscale the resulting image?
Thanks,
Asaf.
This is how we can get a lower-resolution output, so we get a higher FPS when manipulating the image:
// sessionPreset governs the quality of the capture. we don't need high-resolution images,
// so we'll set the session preset to low quality.
self.captureSession.sessionPreset = AVCaptureSessionPresetLow;
Asaf Pinhassi.
Now that Apple is officially allowing UIGetScreenImage() to be used in iPhone apps, I've seen a number of blogs saying that this "opens the floodgates" for video capture on iPhones, including older models. But I've also seen blogs saying that the fastest frame rate they can get with UIGetScreenImage() is around 6 FPS.
Can anyone share specific frame-rate results you've gotten with UIGetScreenImage() (or other approved APIs)? Does restricting the area of the screen captured improve the frame rate significantly?
Also, for the wishful-thinking segment of today's program, does anyone have pointers to code/libraries that use UIGetScreenImage() to capture video? For instance, I'd like an API something like Capture(int fps, Rect bounds, int durationMs) that would turn on the camera and, for the given duration, record a sequence of .png files at the given frame rate, copied from the given screen rect.
There is no specific frame rate. UIGetScreenImage() is not a movie recorder; it just tries to return as soon as it can, which is unfortunately still very slow.
Restricting the area of the screen captured is useless, since UIGetScreenImage() doesn't take any input parameters. Cropping the output image could make the frame rate even worse because of the extra work.
UIGetScreenImage() returns an image of the current screen display. It's said to be slow, but whether it's fast enough depends on the use case. The video recording app iCamcorder uses this function.
According to their blog,
iCamcorder records at an remarkable average minimum of 10 frames per second and a maximum of 15 frames per second.
The UIGetScreenImage method Apple recently allowed developers to use captures the current screen contents. Unfortunately it is really slow, about 15% of the processing time of the App just goes into calling this method. http://www.drahtwerk.biz/EN/Blog.aspx/iCamcorder-v19-and-Giveaway/?newsID=27
So the raw performance of UIGetScreenImage() should be much higher than 15 fps.
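Making that inference explicit: if the whole app runs at 10 to 15 fps while UIGetScreenImage() takes only about 15% of the processing time, the call by itself could sustain roughly

\[
\frac{10\ \mathrm{fps}}{0.15} \approx 67\ \mathrm{fps}
\quad\text{to}\quad
\frac{15\ \mathrm{fps}}{0.15} = 100\ \mathrm{fps}.
\]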
To crop the returned image, you can try
extern CGImageRef UIGetScreenImage(void);
...
// Grab the whole screen, then crop to the rect of interest.
CGImageRef cgoriginal = UIGetScreenImage();
CGImageRef cgimg = CGImageCreateWithImageInRect(cgoriginal, rect);
UIImage *viewImage = [UIImage imageWithCGImage:cgimg];
// Release both CGImageRefs; the UIImage retains what it needs.
CGImageRelease(cgoriginal);
CGImageRelease(cgimg);