MediaCodec for Simultaneous Camera - android-camera

I am working on simultaneous camera streaming and recording using the MediaCodec API. I want to merge the frames from both cameras and feed the result to on-screen rendering as well as to MediaCodec's input surface for recording.
I do not want to create multiple EGLContexts; the same context should be used throughout.
I am using the Bigflake MediaCodec examples as a reference, but I am not clear whether this is possible. Also, how do I bind the multiple textures? We need two textures for the two cameras.
Your valuable input will help me progress further. Currently I am stuck and not clear what to do next.
Regards,
Nehal

Related

iPhone TrueDepth front camera inaccurate face tracking - skewed transformation

I am using an app that was developed using the ARKit framework. More specifically, I am interested in the 3D facial mesh and the face orientation and position with respect to the phone's front camera.
Having said that, I record videos of subjects performing in front of the front camera. During these recordings, I have noticed that some videos result in inaccurate transformations, with the face being placed behind the camera and the rotation being skewed (not an orthogonal basis).
I do not have a deep understanding of how the TrueDepth camera combines all its sensors to track and reconstruct the 3D facial structure, so I do not know what could potentially cause this issue. Although I have experimented with different setups (e.g. different subjects, with and without a mirror, screen on and off, etc.), I still have not been able to identify the source of the inaccurate transformation. Could it be the camera angle interfering with the mirror?
Below I have attached two recordings of myself that resulted in incorrect (above) and correct (below) estimated transformations.
Do you have any idea of what might be the problem? Thank you in advance.
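For what it's worth, one way to quantify the "skewed" rotation you describe is to express the face pose relative to the front camera and check whether its rotation part is still an orthonormal basis. A minimal Swift sketch (the function names are just illustrative):

    import ARKit
    import simd

    // Expresses the face anchor's pose in the front camera's coordinate space.
    // Both transforms are reported in world space, so pre-multiplying by the
    // inverse camera transform gives the face pose relative to the camera.
    func facePoseRelativeToCamera(frame: ARFrame, faceAnchor: ARFaceAnchor) -> simd_float4x4 {
        return frame.camera.transform.inverse * faceAnchor.transform
    }

    // Returns true if the upper-left 3x3 of the transform is (close to) an
    // orthonormal basis, i.e. R * R^T is approximately the identity.
    // A "skewed" transform like the one described above would fail this check.
    func rotationLooksOrthonormal(_ m: simd_float4x4, tolerance: Float = 1e-3) -> Bool {
        let r = simd_float3x3(simd_float3(m.columns.0.x, m.columns.0.y, m.columns.0.z),
                              simd_float3(m.columns.1.x, m.columns.1.y, m.columns.1.z),
                              simd_float3(m.columns.2.x, m.columns.2.y, m.columns.2.z))
        let product = r * r.transpose
        for column in 0..<3 {
            for row in 0..<3 {
                if abs(product[column][row] - matrix_identity_float3x3[column][row]) > tolerance {
                    return false
                }
            }
        }
        return true
    }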

Switch between AVCaptureSession and ARKit - do I need to recalibrate the AR session?

I am working on a project where I need to take high-quality photos (ReplayKit quality is not enough) and combine them with positions from the ARKit frame. I need to take about 10 photos with positions, and each of those photos should be in the same coordinate space.
Since it's impossible to use ARKit and AVCaptureSession simultaneously, I'm thinking about getting the position from ARKit, pausing AR, taking a photo via a new AVCaptureSession, and then resuming the AR session.
The question, though, is whether it's possible to resume the AR session without too much drift from the origin point of the first session.
It would be great to confirm this before implementing the experiment.
Thanks!
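For concreteness, the pause/photo/resume flow being described would look roughly like this (a Swift sketch; the class name is illustrative, and the AVCaptureSession is assumed to be configured with its camera input and photo output elsewhere):

    import ARKit
    import AVFoundation
    import simd

    // Sketch of the proposed flow: grab a pose from ARKit, pause the AR session,
    // take a still with a separate AVCaptureSession, then resume ARKit.
    final class PoseTaggedCapture: NSObject, AVCapturePhotoCaptureDelegate {
        let arSession = ARSession()
        let captureSession = AVCaptureSession()   // assumed configured elsewhere
        let photoOutput = AVCapturePhotoOutput()  // assumed already added to captureSession
        private var savedPose: simd_float4x4?

        func takePoseTaggedPhoto() {
            // 1. Remember the current camera pose in the AR session's coordinate space.
            savedPose = arSession.currentFrame?.camera.transform
            // 2. Stop ARKit so the camera becomes available to AVFoundation.
            arSession.pause()
            // 3. Capture a high-quality still (in a real app, wait until the
            //    capture session reports that it is running before capturing).
            captureSession.startRunning()
            photoOutput.capturePhoto(with: AVCapturePhotoSettings(), delegate: self)
        }

        func photoOutput(_ output: AVCapturePhotoOutput,
                         didFinishProcessingPhoto photo: AVCapturePhoto,
                         error: Error?) {
            // 4. Store photo.fileDataRepresentation() together with savedPose,
            //    then hand the camera back to ARKit.
            captureSession.stopRunning()
            arSession.run(ARWorldTrackingConfiguration())  // expect some drift; see the answer below
        }
    }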
I think this is generally difficult (some say you can, some say you can't).
In theory, though, if you have the camera on a tripod and can keep it really still, you could save the position and rotation of the camera when you stop the session, then create a new session and use these saved parameters as an offset.
I.e. in the second session your position is:
new_position = position + old_position
This will obviously only work if you can really minimise any movement between sessions (like with a tripod and remote trigger).
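A minimal Swift sketch of that bookkeeping, under the same assumption that the device barely moves between sessions (the class and method names are just illustrative):

    import ARKit
    import simd

    final class RelocalizationOffset {
        // Camera transform captured just before the first session is paused.
        private var savedCameraTransform: simd_float4x4?

        // Call right before pausing the AR session (device held still, e.g. on a tripod).
        func save(from session: ARSession) {
            savedCameraTransform = session.currentFrame?.camera.transform
        }

        // Rough position in the first session's coordinate space for a camera
        // transform reported by the second session, i.e.
        // new_position = position + old_position, as described above.
        func adjustedPosition(for newCameraTransform: simd_float4x4) -> simd_float3? {
            guard let old = savedCameraTransform else { return nil }
            let oldPosition = simd_float3(old.columns.3.x, old.columns.3.y, old.columns.3.z)
            let newPosition = simd_float3(newCameraTransform.columns.3.x,
                                          newCameraTransform.columns.3.y,
                                          newCameraTransform.columns.3.z)
            return newPosition + oldPosition
        }
    }

The same trick would have to be applied to the rotation, and it all falls apart as soon as the device moves noticeably between sessions.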

Keynote/PowerPoint in Unity

Situation:
I am working on a project that allows the user to practice presentations in a VR room. This includes the use of PowerPoint/Keynote, which is displayed on a plane. Displaying images is easily possible, as is video.
Problem:
Here's the problem: images don't contain movement, but a PowerPoint/Keynote file often does, and Unity does not support the PowerPoint or Keynote file formats. Exporting to HTML and writing our own parser for the JSON files to apply the animations doesn't seem worth the effort.
Current situation:
At the moment we have converted all slides to textures, without the animations.
Request:
In the past there were some plugins to display HTML on a plane (flat surface), but these seem to be outdated. Does anyone out there have a solution for this problem?
Thanks in advance.
Although this answer doesn't address the specific request of displaying HTML on a quad (plane, whatever) in Unity, it is a solution that may be worth considering if it fits your scenario.
If the presentations are linear, why not record them as video? You can easily play the video on a quad in Unity using a RenderTexture and pause it at the right moments to wait for the user to trigger the next slide/animation, whereupon the video can be played again until the next stop point.
This will require little programming on your part, but it isn't the most flexible solution, as it requires a linear slideshow and for you to create pause points in the video playback at the correct timings to match the points where the slideshow naturally awaits a mouse click from the user.

Can we use Video as a Game Environment?

I am going to build an FPS video game. While developing my game, this question came to mind: every video game developer spends a great deal of time and effort making their game's environment more realistic and lifelike. So my question is:
Can we use HD or 4K real videos as our game's environment? (As seen on Google Street View, but with higher quality.)
If we can, how do we program the game engine?
Thank you very much!
The simple answer to this is NO.
Of course, you can extract textures from the video by capturing frames from it, but that's it. Once you capture the texture, you still need a way to make a 3D model/mesh you can apply the texture to.
Now, there have been many companies working on video-to-3D-model converters. That technology exists, but it is aimed more at movie production. Even with this technology, the 3D models generated from a video are not accurate, and they are not meant to be used in a game because they end up with so many polygons that they will easily choke your game engine.
Also, doing this in real time is another story. You would need to continuously read a frame from the video, extract a texture from it, generate a mesh for the HQ texture, and clean up/reduce/reconstruct the mesh so that your game engine won't crash or drop frames. You would then have to generate UVs for the mesh so that the extracted image can be applied to it.
Finally, each of these steps is CPU intensive. Doing them all in series, in real time, will likely make your game unplayable. I have also made this sound easier than it is. What you can do with the video is use it as a reference for modelling your 3D environment in a 3D application. That's it.

How to "hang on" to two distinct points coming from iPhone's camera input in live stream?

How could I get "hold of" two points coming from the iPhone's camera (in a live stream), like these people do: http://www.robots.ox.ac.uk/~gk/youtube.html (they're using this technique to bypass the need for markers in AR)?
I'm not interested in AR; I'm only interested in coming up with a way to "hang on" to such points coming from the camera's live stream and not lose them, regardless of whether I move the camera closer to them or further away, to the left or right, etc.
Is it just a matter of writing code that scans the camera's input for something that "stands out" (because of a difference in color, high contrast, etc.)?
Thank you for any helpful ideas or starting points!
Check out http://opencv.willowgarage.com/wiki/
OpenCV is an open-source library that can do lots of things around image recognition and tracking.
If you google for it along with iOS as a keyword, you should run into a few related projects that might help you further.
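If you would rather stay with Apple's built-in frameworks instead of OpenCV, the Vision framework can track a region you select in a live camera feed. A minimal Swift sketch (the class name and confidence threshold are just illustrative), assuming you feed it pixel buffers from an AVCaptureVideoDataOutput:

    import Vision
    import CoreGraphics
    import CoreVideo

    // Tracks one user-selected region across successive camera frames,
    // using Apple's Vision framework as an alternative to OpenCV.
    final class PointTracker {
        private let sequenceHandler = VNSequenceRequestHandler()
        private var lastObservation: VNDetectedObjectObservation

        // initialBoundingBox is in Vision's normalized coordinates (origin bottom-left, 0...1).
        init(initialBoundingBox: CGRect) {
            lastObservation = VNDetectedObjectObservation(boundingBox: initialBoundingBox)
        }

        // Call once per camera frame; returns the tracked region's new bounding box,
        // or nil if tracking was lost.
        func track(in pixelBuffer: CVPixelBuffer) -> CGRect? {
            let request = VNTrackObjectRequest(detectedObjectObservation: lastObservation)
            request.trackingLevel = .accurate
            do {
                try sequenceHandler.perform([request], on: pixelBuffer)
            } catch {
                return nil
            }
            guard let result = request.results?.first as? VNDetectedObjectObservation,
                  result.confidence > 0.3 else {
                return nil
            }
            lastObservation = result
            return result.boundingBox
        }
    }

You would create one PointTracker per point of interest (two in your case) and feed each new frame to both trackers.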