How to record video when using the face detection API - android-camera

I'm using the Android FaceDetector to detect faces. My code is based on
https://github.com/googlesamples/android-vision/tree/master/visionSamples/FaceTracker
I want to record and save the video from the camera preview. I haven't found a solution yet and would appreciate any guidance or a link. Thanks, all.

Related

TensorFlow speaker recognition

Can someone please tell me if it is possible to do speaker recognition using TensorFlow? I am extracting MFCC data from an audio file using librosa, and from that I want to recognize the speaker. Any suggestions or links on how to implement that?
Check this repo -> https://github.com/pannous/tensorflow-speech-recognition
You can look at speaker_classifier_tflearn.py to get a feel for how it works.

How to record and play a captured video using AVAssetWriter and AVAssetReader in the iPhone SDK

I am displaying the camera data (video) on the preview layer. Now I want to record the video, store it in a local file, and access it to play on the screen.
I have seen on some websites that this is possible with AVAssetWriter and AVAssetReader, but it is very difficult for me to understand. Can anyone advise me in a clear-cut manner or with some sample code?
Any help will be much appreciated.
Thanks to all,
Monish.
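
Not a full answer, but here is a minimal sketch of the AVAssetWriter side, assuming you already have an AVCaptureSession with an AVCaptureVideoDataOutput delivering sample buffers to a delegate. The MovieRecorder class name, the 1280x720 output settings, and the output URL handling are placeholders, not anything from the original sample.

import AVFoundation

// Sketch: writing live camera sample buffers to a local .mov file with AVAssetWriter.
// Assumes an existing capture session whose AVCaptureVideoDataOutput delegate is this object.
final class MovieRecorder: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    private var writer: AVAssetWriter?
    private var writerInput: AVAssetWriterInput?
    private var sessionStarted = false

    func startRecording(to url: URL) throws {
        let writer = try AVAssetWriter(outputURL: url, fileType: .mov)
        let settings: [String: Any] = [
            AVVideoCodecKey: AVVideoCodecType.h264,   // placeholder settings
            AVVideoWidthKey: 1280,
            AVVideoHeightKey: 720
        ]
        let input = AVAssetWriterInput(mediaType: .video, outputSettings: settings)
        input.expectsMediaDataInRealTime = true       // required for live capture
        writer.add(input)
        writer.startWriting()
        self.writer = writer
        self.writerInput = input
        self.sessionStarted = false
    }

    // Called by AVCaptureVideoDataOutput for every camera frame.
    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let writer = writer, let input = writerInput else { return }
        if !sessionStarted {
            // Start the writer's timeline at the first frame's timestamp.
            writer.startSession(atSourceTime: CMSampleBufferGetPresentationTimeStamp(sampleBuffer))
            sessionStarted = true
        }
        if input.isReadyForMoreMediaData {
            input.append(sampleBuffer)
        }
    }

    func stopRecording(completion: @escaping () -> Void) {
        writerInput?.markAsFinished()
        writer?.finishWriting(completionHandler: completion)
    }
}

Once finishWriting completes, the file at the output URL can be played back with AVPlayer, or read frame by frame with AVAssetReader if you need the raw samples again.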

iPhone: Video API: modifying a live video stream

I have a question about video stream processing. Is it possible to access and modify the real-time video stream during recording (for example, to add some text to the video)? I can do this in the preview by processing separate frames, but I'm looking for a tool that will let me store the video with my text rendered into the frames.
There are probably libraries/tools already available for this (but I haven't found any yet).
Try the GPUImage library. It can help you.
You should check the AVCam sample code by Apple. That might be a starting point.
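
Besides GPUImage and AVCam, one alternative (not real-time, but it does store the text in the frames) is to record the clip first and then burn the overlay in as a post-processing step with AVVideoCompositionCoreAnimationTool. A rough sketch, where the source/output URLs, text, and layer geometry are all placeholders:

import AVFoundation
import UIKit

// Sketch: burning a text overlay into an already-recorded video, then exporting the result.
func exportVideoWithText(from sourceURL: URL, to outputURL: URL,
                         text: String,
                         completion: @escaping (Bool) -> Void) {
    let asset = AVAsset(url: sourceURL)
    guard let track = asset.tracks(withMediaType: .video).first else {
        completion(false); return
    }
    let size = track.naturalSize

    // Video layer that receives the rendered frames, plus a text layer on top of it.
    let videoLayer = CALayer()
    videoLayer.frame = CGRect(origin: .zero, size: size)
    let textLayer = CATextLayer()
    textLayer.string = text
    textLayer.fontSize = 48
    textLayer.foregroundColor = UIColor.white.cgColor
    textLayer.frame = CGRect(x: 20, y: 20, width: size.width - 40, height: 60)

    let parentLayer = CALayer()
    parentLayer.frame = CGRect(origin: .zero, size: size)
    parentLayer.addSublayer(videoLayer)
    parentLayer.addSublayer(textLayer)

    let composition = AVMutableVideoComposition(propertiesOf: asset)
    composition.animationTool = AVVideoCompositionCoreAnimationTool(
        postProcessingAsVideoLayer: videoLayer, in: parentLayer)

    guard let export = AVAssetExportSession(asset: asset,
                                            presetName: AVAssetExportPresetHighestQuality) else {
        completion(false); return
    }
    export.videoComposition = composition
    export.outputURL = outputURL
    export.outputFileType = .mov
    export.exportAsynchronously {
        completion(export.status == .completed)
    }
}

If you really need the text composited while recording, GPUImage (blending a UI element over the camera feed and writing with its movie writer) is the more common route; the sketch above is just the plain-AVFoundation fallback.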

Realtime Video Processing on iPhone

We have a requirement to 'read' an LED pulse lamp using the iPhone video camera. The LED lamp emits light based on some load conditions.
Is there any related iPhone API to help achieve this goal?
Thanks much.
You can use the AVFoundation framework to read and process the live video stream from the camera. WWDC 2010 Session 405 gives you a good overview of AVFoundation.
There are iOS AV APIs to get raw pixel bitmaps from the video camera(s). Detecting any specific image or brightness within these raw bitmaps has to be done in your own code.
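
To make the AVFoundation suggestion concrete, here is a rough sketch that reads raw BGRA frames from the camera with AVCaptureVideoDataOutput and computes a crude average brightness per frame. The sampling step, the use of the green channel as a brightness proxy, and how you threshold the value to detect the LED pulse are assumptions you would tune yourself:

import AVFoundation

// Sketch: per-frame brightness measurement from the live camera feed.
final class BrightnessReader: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    let session = AVCaptureSession()

    func start() throws {
        guard let camera = AVCaptureDevice.default(for: .video) else { return }
        session.addInput(try AVCaptureDeviceInput(device: camera))

        let output = AVCaptureVideoDataOutput()
        output.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String:
                                    kCVPixelFormatType_32BGRA]
        output.setSampleBufferDelegate(self, queue: DispatchQueue(label: "frames"))
        session.addOutput(output)
        session.startRunning()
    }

    // Called for every camera frame.
    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let buffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        CVPixelBufferLockBaseAddress(buffer, .readOnly)
        defer { CVPixelBufferUnlockBaseAddress(buffer, .readOnly) }

        let width = CVPixelBufferGetWidth(buffer)
        let height = CVPixelBufferGetHeight(buffer)
        let bytesPerRow = CVPixelBufferGetBytesPerRow(buffer)
        guard let base = CVPixelBufferGetBaseAddress(buffer) else { return }
        let pixels = base.assumingMemoryBound(to: UInt8.self)

        // Sample a sparse grid of pixels and average the green channel as a rough brightness proxy.
        var total = 0, count = 0
        for y in stride(from: 0, to: height, by: 4) {
            for x in stride(from: 0, to: width, by: 4) {
                total += Int(pixels[y * bytesPerRow + x * 4 + 1])  // G in BGRA
                count += 1
            }
        }
        let brightness = Double(total) / Double(count)
        print("average brightness:", brightness)  // threshold this over time to detect LED on/off
    }
}

Tracking that value across frames (for example, thresholding it and timing the transitions) is the part you would write yourself to decode the lamp's pulses.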

About uploading iPhone video to YouTube

How do I set the format of the video captured by the iPhone camera? I assume it could be MOV, MP4, or H.26x, since the iPhone can play those, but I cannot find an API to set the video format for recording.
I also want to upload the video to YouTube. Does anyone know where to find good open source code for this purpose? I think this is a common feature that must have been done by other developers; I just want to save time.
Thanks for your help.
Using gdata is enough.
Using gdata is the way to go, as Robin mentioned above. Here is a great article that outlines what you need to do and provides a great set of source code to get you started.
Hope this helps!!
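
On the format question: as far as I know, the stock camera/UIImagePickerController does not let you pick the recording container directly (it gives you a .mov), but you can re-export the captured file as an MP4 with AVAssetExportSession before uploading. A minimal sketch, where the URLs and preset are placeholders:

import AVFoundation

// Sketch: converting a captured .mov into an .mp4 prior to upload.
func exportAsMP4(from movURL: URL, to mp4URL: URL, completion: @escaping (Bool) -> Void) {
    let asset = AVAsset(url: movURL)
    guard let export = AVAssetExportSession(asset: asset,
                                            presetName: AVAssetExportPresetMediumQuality) else {
        completion(false); return
    }
    export.outputURL = mp4URL
    export.outputFileType = .mp4
    export.exportAsynchronously {
        completion(export.status == .completed)
    }
}

The actual YouTube upload itself is what the gdata library mentioned above handles; the sketch only covers getting the file into a format YouTube accepts.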