I'm trying to use vision.VideoPlayer in a custom GUIDE GUI. The video source is a camera. Right now I can get it to work with the camera, but the vision.VideoPlayer window pops out of my GUI. I've read the example given, but it doesn't use the VideoPlayer; rather, it uses the VideoReader object to read a video file and display the frames in a GUI.
Is there any way to embed the vision.VideoPlayer in my GUI using input from a camera?
I just uploaded this to the FEX:
http://www.mathworks.com/matlabcentral/fileexchange/53600-fancyflowplayer
It uses no high-level dependencies, only the "VideoReader" interface (which, by the way, vision.VideoPlayer also uses).
It is by no means a competitor to VLC (this IS MATLAB we are talking about), but it is all open source and shows how you can access and process video in real time. It also has a working draggable seek bar and extra keyboard and mouse controls that are really handy, which vision.VideoPlayer does not have.
cheers,
Stefan
Unfortunately, there is no way to use vision.VideoPlayer in a custom GUI directly. However, here is an example of how to play a video inside a custom GUI without using vision.VideoPlayer.
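A minimal sketch of that approach, assuming the Image Acquisition Toolbox, an axes tagged axes1 in the GUIDE figure, and a 'winvideo' camera (the adaptor name and device ID are assumptions; adjust for your hardware):

```matlab
% In your GUIDE OpeningFcn: direct the live camera stream into an
% image object that lives inside the GUI's own axes, instead of
% letting it open a separate window.
vid = videoinput('winvideo', 1);          % adaptor/ID are assumptions
res = vid.VideoResolution;                % [width height]
hImage = image(zeros(res(2), res(1), 3, 'uint8'), ...
               'Parent', handles.axes1);  % axes created in GUIDE
preview(vid, hImage);                     % stream frames into the GUI
```

Passing an image handle to preview is the documented way to redirect the live stream into your own figure rather than the default preview window.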
I'm wondering if there's a way to recreate the "Object" experience when viewing a .usdz file through Apple's AR Quick Look. I want an experience that showcases a 3D object without "augmenting reality".
Some options that I'm thinking of that might be able to recreate this feature:
1) Using ARKit, disabling the camera and setting my own background with a custom image. I would then set the USDZ object in the center of the device's screen while keeping all the interaction functionality for the 3D object.
2) Web AR: recreate this 3D experience elsewhere and showcase it in a webview.
Any guidance or discussion about this is much appreciated - thank you!
You can use Google's model-viewer if you are going with the web solution. Another easy and effective solution would be echoAR (full disclosure, this is where I work). You can simply upload your models there and then get a link to their model view. You can upload models in different formats (OBJ, FBX, glTF, GLB, USDZ) and it will automatically convert them to the format needed to view on any device.
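For the model-viewer route, the embed is only a few lines; a minimal sketch (the model file names are placeholders):

```html
<script type="module" src="https://unpkg.com/@google/model-viewer/dist/model-viewer.min.js"></script>

<!-- camera-controls gives the orbit/zoom interaction of AR Quick Look's
     "Object" mode without augmenting reality; src/ios-src are placeholders. -->
<model-viewer src="model.glb"
              ios-src="model.usdz"
              camera-controls
              auto-rotate>
</model-viewer>
```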
I have tried to capture a screen video using this sample code
How to capture screen activity to a movie file using AV Foundation
It's working fine, but I am wondering how I can capture a specific window of an individual app (not a screen area specified by a CGRect).
I'm asking because Google Hangouts can share a specific window even when it isn't visible.
So, my questions are:
How can I modify the code above to achieve this?
Is it possible to capture a few windows at the same time?
I'm not sure it is possible to capture video of a background window using AVFoundation, unless you make your own concrete subclass of AVCaptureInput. But taking a screenshot of a background window can be implemented using the CGWindowListCreateImage() function from the Core Graphics framework. Apple's SonOfGrab sample code may be helpful.
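A minimal Swift sketch of the screenshot approach (the lookup by owner name is simplified and the window-matching logic is an assumption; note that on recent macOS versions ScreenCaptureKit supersedes these Core Graphics calls):

```swift
import AppKit
import CoreGraphics

// Snapshot a single window by its window ID, even when it sits behind
// other windows.
func snapshotWindow(ownedBy appName: String) -> CGImage? {
    guard let info = CGWindowListCopyWindowInfo(.optionOnScreenOnly,
                                                kCGNullWindowID) as? [[String: Any]],
          let window = info.first(where: {
              $0[kCGWindowOwnerName as String] as? String == appName
          }),
          let windowID = (window[kCGWindowNumber as String] as? NSNumber)?.uint32Value
    else { return nil }

    // .optionIncludingWindow restricts the capture to this one window;
    // .boundsIgnoreFraming trims the shadow/frame decoration.
    return CGWindowListCreateImage(.null,
                                   .optionIncludingWindow,
                                   windowID,
                                   [.boundsIgnoreFraming])
}
```

As for several windows at once: CGWindowListCreateImageFromArray() takes an array of window IDs and composites them into a single image.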
I am creating a video player app for Android. For that, I need to create thumbnails for the videos present in the videos folder.
After searching the web, I came to understand that Unity's MovieTexture is not supported on Android. I was able to solve that part using a plugin.
For creating thumbnails, I planned to create a canvas and load GUI objects at runtime from a prefab: "GUI Raw Image" elements whose images would be the thumbnails representing the videos.
As my luck would have it, I came to understand that a GUI Raw Image is not a clickable (trigger) object, so I changed to GUI Buttons instead of Raw Images.
But the issue is that I am not able to attach an image to my button prefab.
Can anyone help me?
Thanks in advance.
When using the new UI objects, as opposed to the legacy GUI, you must have the image imported as a Sprite (the "Sprite (2D and UI)" texture type).
Unity Import Reference
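If you need to set the thumbnail at runtime rather than in the editor, a minimal sketch (thumbnailTexture stands in for whatever Texture2D your video plugin produces):

```csharp
using UnityEngine;
using UnityEngine.UI;

public class ThumbnailButton : MonoBehaviour
{
    public Button button;
    public Texture2D thumbnailTexture;

    void Start()
    {
        // Wrap the texture in a Sprite at runtime; for assets imported
        // in the editor, setting Texture Type to "Sprite (2D and UI)"
        // does the equivalent job.
        Sprite sprite = Sprite.Create(
            thumbnailTexture,
            new Rect(0, 0, thumbnailTexture.width, thumbnailTexture.height),
            new Vector2(0.5f, 0.5f));

        button.image.sprite = sprite;  // the Button's target graphic
    }
}
```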
Actually, I need a region over the camera overlay, like most QR code scanner apps have.
And when a square object comes within it, the app should focus on it and take a picture of it. Any idea how to implement this? I was using the UIImagePickerController class, but after doing some googling I found that I need to use the AVFoundation framework, which unfortunately I am not familiar with.
Any code or any tutorial will be helpful. Please let me know how I can implement this.
One more thing: if I need to take a picture, can I make the picture only the size of the region?
Yes, you are correct: you will need to use AVFoundation to implement this. Have a look at the 'Using the Camera with AV Foundation' video from the WWDC 2010 session videos to get an overview of the framework.
AVFoundation has no dependencies on UIKit, so you will get some nice performance increases over using UIImagePickerController. It will also give you full access to the camera.
When using AVFoundation you are in control of the device capture settings, i.e. flash, as well as focus mode and exposure, including their points of interest. Have a look at the programming guide to see how to use these, otherwise the device behaviour may differ from what you expect.
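For example, pointing autofocus and auto-exposure at the centre of the scan region might look like this; a sketch against the modern Swift API rather than the 2010-era code (the point is in the device's normalised coordinate space):

```swift
import AVFoundation
import CoreGraphics

// Aim focus and exposure at a point of interest, e.g. the centre of
// the overlay region, expressed in normalised device coordinates
// ((0,0) top-left to (1,1) bottom-right in the sensor's orientation).
func focus(device: AVCaptureDevice, at point: CGPoint) {
    do {
        try device.lockForConfiguration()
        if device.isFocusPointOfInterestSupported {
            device.focusPointOfInterest = point
            device.focusMode = .autoFocus
        }
        if device.isExposurePointOfInterestSupported {
            device.exposurePointOfInterest = point
            device.exposureMode = .autoExpose
        }
        device.unlockForConfiguration()
    } catch {
        print("Could not lock device for configuration: \(error)")
    }
}
```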
You can also download an example of an application that uses AVFoundation to implement the camera here.
Once you're up and running with that, have a look at this tutorial to get started with the overlay on the camera.
One more thing: if I need to take a picture, can I make the picture only the size of the region?
Yes, you will be able to implement this. You can also configure the AVFoundation session itself to output the lowest practical resolution.
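A sketch of both points, choosing a lower session preset and cropping a captured UIImage to the overlay region (the region is assumed to be expressed in the image's pixel coordinates):

```swift
import AVFoundation
import UIKit

// Pick a lower-resolution preset before starting the session.
func makeSession() -> AVCaptureSession {
    let session = AVCaptureSession()
    if session.canSetSessionPreset(.medium) {
        session.sessionPreset = .medium   // lower than full photo quality
    }
    return session
}

// Crop a captured image down to the overlay's region.
func crop(_ image: UIImage, to region: CGRect) -> UIImage? {
    guard let cropped = image.cgImage?.cropping(to: region) else { return nil }
    return UIImage(cgImage: cropped,
                   scale: image.scale,
                   orientation: image.imageOrientation)
}
```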
Which 3D model format supports animation, in order to use it in an augmented reality application on iPhone?
I have an app that works like that: the camera detects the marker and then places an object related to that marker. However, the object is not animated; it just stands there. Of course I can move the object programmatically, but I don't want to do that. What I want is for the object to have an animation of its own. I searched but couldn't find the exact file format. There are .obj files (not animated themselves, is that true?), .mtl files, .anm files, etc. If the format is one of these, can you give me an example model?
Thanks
Definitely, MD2 is your best choice. I have integrated jPCT-AE with Vuforia and it works like a charm. AFAIK, MD2 is best for animated models, mainly because it can store the key-frame animation in the file itself, and you can have any kind of animation with it at almost no cost.
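A minimal jPCT-AE sketch of loading and stepping an MD2 model (the input stream and the sequence number are placeholders; sequence ids come from the key frames stored in the MD2 file itself):

```java
import com.threed.jpct.Loader;
import com.threed.jpct.Object3D;
import com.threed.jpct.World;
import java.io.InputStream;

public class Md2Demo {
    // Load the MD2 model and add it to the jPCT world; Loader.loadMD2
    // parses the key-frame animation embedded in the file.
    public static Object3D loadAnimatedModel(World world, InputStream md2) {
        Object3D model = Loader.loadMD2(md2, 1.0f);
        model.build();
        world.addObject(model);
        return model;
    }

    // Call once per rendered frame: 'phase' runs from 0 to 1 through
    // the key frames of the given animation sequence.
    public static void stepAnimation(Object3D model, float phase) {
        model.animate(phase, 1);  // sequence 1 is a placeholder id
    }
}
```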
Here is a test video:
http://youtu.be/chsHh0pEhzw
If you have more questions, do not hesitate ;)
You should specify which AR SDK/platform you are using to create the iPhone application you are talking about. That being said, for many of the common AR SDKs available, the MD2 format is often used to display animated models (either with a built-in render engine or with example code that shows how to use the MD2 format with that SDK):
http://en.wikipedia.org/wiki/MD2_(file_format)
http://www.junaio.com/develop/docs/documenation/general/3dmodels/
OBJ (Wavefront) files do not support animation.