Tutorial using GStreamer for saving photos and videos from a webcam - GTK

I know that I have to use GStreamer to create an application that captures photos and videos
from a webcam. But I could not find any documentation that clearly describes the steps for building
one. I would be very pleased if anyone could help me with this.
Regards,
iSight

There is a GNOME/Vala tutorial in the works that shows you how to make a GTK application to grab pictures:
https://developer.gnome.org/gnome-devel-demos/3.10/magic-mirror.vala.html.en
Saving a video is left for you to do; you'll have to use a tee so you get both live preview and recording simultaneously (see the sketch below).
There is also a high-level element, camerabin, that covers your use case, but I have never used it myself.
Camerabin is used by libcheese, which is even higher level, depends on Clutter (but not GTK), and lets you plug in effects with cheese_camera_set_effect().
Pick what you need!
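To make the tee idea concrete, here is a minimal sketch using GStreamer's Python bindings; the exact pipeline string, the 10-second cut-off and the output filename are only placeholders, and v4l2src assumes a Linux webcam:

```python
# Live webcam preview plus simultaneous recording via a tee.
# Assumes GStreamer 1.x with PyGObject; swap v4l2src for your platform's camera source.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

Gst.init(None)

pipeline = Gst.parse_launch(
    "v4l2src ! videoconvert ! tee name=t "
    "t. ! queue ! autovideosink "                             # live preview branch
    "t. ! queue ! videoconvert ! x264enc tune=zerolatency "
    "! matroskamux ! filesink location=capture.mkv"           # recording branch
)

loop = GLib.MainLoop()

def on_message(bus, msg):
    # Quit cleanly on end-of-stream or error so the muxer can finalize the file.
    if msg.type in (Gst.MessageType.EOS, Gst.MessageType.ERROR):
        loop.quit()

bus = pipeline.get_bus()
bus.add_signal_watch()
bus.connect("message", on_message)

pipeline.set_state(Gst.State.PLAYING)
# Stop after 10 seconds by sending EOS; returning False makes the timeout one-shot.
GLib.timeout_add_seconds(10, lambda: pipeline.send_event(Gst.Event.new_eos()) and False)
loop.run()
pipeline.set_state(Gst.State.NULL)
```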

Related

Personalized Video / Facebook App - What is the best approach?

I want to build a Facebook app featuring a personalized video which imports content assets from the user's Facebook profile and their extended social graph and integrates these assets within the timeline. I am thinking of using Flash, but a key stipulation is that the app works on mobile, so I would need to use HTML5. My question is: can I use Flash to build the application and then compile the app as HTML5, or is there an alternative solution in the form of an HTML5 video toolkit with a programming layer that would allow me to build a web app and access the Facebook API?
I have done this a few times over the years, and yes, Flash was the easiest. However, there are a few purely HTML5-based options available that I know of; personally I'd stay away from Flash here, as it will end up just getting in the way:
1- The cleanest method is to use a video compositing tool on the server side which can be programmed to accept variables. Personally I have only ever done this using ffmpeg, but there are a couple of alternatives out there.
The basic process would be to grab the media from Facebook and then composite it at certain points on top of/below/around a base video sitting on the server, using a shell script to which you pass the media assets as variables. There are many options as to how this can be done; it's probably best to have a look at some of these examples (a rough sketch of the compositing step follows after this item):
http://broadcasterproject.wordpress.com/2010/05/18/how-to-layerremix-videos-with-free-command-line-tools/
http://graphcomp.com/ffmpeg/
ffmpeg watermark without vhook?
Note that the last time I did this I used vhooks and custom filters; vhooks are now deprecated.
This method will mean a reasonably heavy server load if your app is popular, but it's probably the most robust across devices etc.
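To give an idea of what that compositing step can look like, here is a rough sketch that drives ffmpeg's overlay filter from Python; the file names, the overlay position and the 2-8 second window are made-up placeholders, not something taken from the linked examples:

```python
# Overlay a user's profile picture on top of a base video for a few seconds.
import subprocess

def composite(base_video, user_image, output):
    cmd = [
        "ffmpeg", "-y",
        "-i", base_video,                 # the pre-produced base clip on the server
        "-i", user_image,                 # asset pulled from the user's Facebook profile
        "-filter_complex",
        # scale the image, then show it at (40,40) only between t=2s and t=8s
        "[1:v]scale=320:-1[pic];"
        "[0:v][pic]overlay=40:40:enable='between(t,2,8)'",
        "-c:a", "copy",                   # keep the original audio untouched
        output,
    ]
    subprocess.run(cmd, check=True)

composite("base.mp4", "profile.jpg", "personalized.mp4")
```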
2- Use Popcorn.js and let the processing be done on the client side. You could hard-code it using CSS/JS/HTML, but Popcorn is pretty stable. I haven't seen how it runs on devices, but in theory it should work, since it's all standardized technologies. Basically the process would be to use JavaScript to fire the display of images overlaid on the base video file at preset cue points. Popcorn already has all the methods and means for you to do this.
Hope this helps a bit. Good luck, sounds fun.
We have built some interactive video apps, and one recent project was quite similar to what your question describes.
We used Adobe Flash to track the motion and published the project via create.js. You could have an image sequence from within create.js or put a video in a layer behind it. This video would then control the playhead time of the create.js motion-tracked sequence via jQuery.
It worked fine - here is a link to a test setup with an image sequence.
Video integration would be the next step.
http://www.jungeroemer.net/projekte/testpersvid/elftest01.html
(German text, sorry, but there is nothing important to read there.
Just click the images and go for it.)
You can download the sources from the link; if you need, I can also upload the Flash file to show you the motion tracking.

How to record gameplay in a cocos2d iPhone app

I am developing a cocos2d app.
It's almost complete, but now I want to record the activity of my app as a video file, including the sound produced by the app.
How can I implement this?
Can anybody help me and suggest a way to do it?
Thanks in advance.
The question isn't new, but since it isn't answered I thought I'd pitch in:
We provide an SDK called "Everyplay" that allows you to do exactly what you're looking for. It's free to use, and is lightweight.
We provide out-of-the-box integrations for Unity3D, cocos2d (1.x, 2.x) and cocos2d-x, and you can of course integrate with a custom OpenGL-based game engine as well.
The documentation is available at https://developers.everyplay.com/doc
The documentation contains an example app key to use when developing, but you can of course sign up for your own client key at https://developers.everyplay.com/
There are many options - and the fact that your app is cocos2d doesn't matter much.
iSimulate works well. You can actually play the app on your device and record the gameplay as well as the touch events. This is important if you want to show user interaction in your app. You run the app in the simulator but you control it from your device.
If you just want to record the app interaction without caring about showing users the touch events, you can use Screenflow or Jing or some other recording software. I used to use Jing (free), but Screenflow works better for me, and it also lets you create more advanced videos, such as a trailer with effects. Edit: You should be able to capture touch events through the simulator with Screenflow too. You can choose whether or not to show them, and you can use different indicators for those events.
Search Google for Mac or iPhone recording software. There are many options. I had the best experience with Screenflow because I wanted to make a trailer and a gameplay video.
I'm developing a similar application which allows the user to record the activity within a cocos2d-x app.
I'm using a screen-capture method and then combining the captured frames using FFmpeg (see the sketch below). The performance wasn't too good, though, but it is the easiest way to achieve this.
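For what it's worth, the combine step can be as simple as handing the dumped frames and the recorded audio to ffmpeg. The sketch below assumes a numbered PNG sequence and a separate audio.wav, which are placeholders; the actual frame grabbing has to happen inside your cocos2d-x render loop:

```python
# Combine a dumped sequence of screenshots with a recorded audio track using ffmpeg.
# Assumes the game loop already wrote frames/frame_00001.png, frame_00002.png, ...
# and that the app's audio was captured separately to audio.wav.
import subprocess

subprocess.run([
    "ffmpeg", "-y",
    "-framerate", "30",                        # playback rate of the captured frames
    "-i", "frames/frame_%05d.png",             # numbered screenshot sequence
    "-i", "audio.wav",
    "-c:v", "libx264", "-pix_fmt", "yuv420p",  # widely playable H.264 output
    "-c:a", "aac",
    "-shortest",                               # stop when the shorter input ends
    "gameplay.mp4",
], check=True)
```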

How to create an application for video sharing or live video viewing between two iPhones

I am creating an application where one person can view live video from another iPhone, i.e. one iPhone is recording and the other is viewing the same stream, as with FaceTime, but this should be handled by our own server.
I have come to know that I can use an XMPP client, and we could also use Google's APIs, but how do I use them, and what else is required to create this kind of application?
Also, do we need to create our own server-side part, or can we use existing servers, like Google/GTalk or anything else that is already available?
Please guide me on what other things are required for this.
Thanks.
I believe that for connecting two devices together, GStreamer is one of the best choices: it's broadly used and there is a lot of material and documentation on it.
GStreamer has a pipeline architecture that was inspired by DirectShow and QuickTime, and it provides a command-line tool named gst-launch that allows you to create a pipeline and quickly test several components of the library together.
This message shares some interesting info on how to stream video directly from the iPhone camera using gst-launch, while receiving the data on a PC through VLC. That means 50% of what you are looking for is already done.
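The exact pipelines are in that message; purely as an illustration of the receiving side, here is a hedged sketch of a GStreamer RTP/H.264 receiver in Python, where the port and payload type are assumptions you would match to whatever the sender uses:

```python
# Receive an RTP/H.264 stream sent by the phone and show it in a window.
# Port 5000 and payload type 96 are placeholders; match the sender's settings.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

Gst.init(None)
pipeline = Gst.parse_launch(
    'udpsrc port=5000 caps="application/x-rtp,media=video,'
    'clock-rate=90000,encoding-name=H264,payload=96" '
    "! rtph264depay ! h264parse ! avdec_h264 ! videoconvert ! autovideosink"
)
pipeline.set_state(Gst.State.PLAYING)
GLib.MainLoop().run()
```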
Another option, also demonstrated in that message, is to use FFmpeg.
I'd like to advocate FFmpeg, which has been successfully ported to iOS.
What you need to do is:
1. rewrite ffserver to use the camera input as the video source and encode it with an H.264/MPEG-4 encoder;
2. rewrite ffplay so that it can display video on iOS devices. The network protocol and video decoder parts are ready.

iPhone: Building talking puppet application

I am trying to learn how to build a talking puppet iPhone application. A great example is "Talking Ben the Dog", and there is a video of it on YouTube. I have no idea how I am going to build such an application. I have a graphics designer who will do their part. As a programmer, what would I need to be aware of? If someone could share their ideas or point me to some relevant documentation or sample code, that would be a great help.
Thanks.
First, you'll need to create the content. That means the animation scenes and any associated audio. Next, you'll want to trigger those scenes based upon the user's input.
If you want more advanced functionality like "talk back", where the app repeats what you say, then you'll need to get to grips with the AudioQueue and AudioUnit APIs. That means detecting the level of the incoming audio and then triggering the writing of that audio into stored buffers. These APIs are difficult, so this will be the most technically challenging part. You'll need to be comfortable with pointers and other lower-level programming concepts.
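The core of that talk-back loop is: watch the input level, capture while the user is talking, then play the captured buffers back. Below is a rough sketch of just that logic, written with Python's sounddevice library instead of the actual AudioQueue/AudioUnit calls, with a made-up threshold and silence count you would have to tune:

```python
# Conceptual sketch of level-triggered capture and playback (not iOS code).
import numpy as np
import sounddevice as sd

SAMPLE_RATE = 44100
THRESHOLD = 0.02          # RMS level that counts as "speech" (assumption, tune per device)
SILENCE_BLOCKS = 30       # stop recording after this many quiet blocks in a row

recorded = []             # audio blocks captured while the user is talking
quiet = 0
talking = False

def on_audio(indata, frames, time, status):
    """Measure the level of each incoming block and store it while the user talks."""
    global quiet, talking
    level = np.sqrt(np.mean(indata ** 2))   # RMS of this block
    if level > THRESHOLD:
        talking = True
        quiet = 0
    elif talking:
        quiet += 1
    if talking:
        recorded.append(indata.copy())
    if talking and quiet > SILENCE_BLOCKS:
        raise sd.CallbackStop()              # enough silence: stop capturing

with sd.InputStream(samplerate=SAMPLE_RATE, channels=1, callback=on_audio):
    sd.sleep(10_000)                         # listen for up to 10 seconds

if recorded:
    clip = np.concatenate(recorded)
    sd.play(clip, SAMPLE_RATE)               # "talk back": replay what was captured
    sd.wait()
```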
For an app without talk back, a lot of work will be required to create the content. Then you'll need to re-create the animations using UIImage and the Core Animation framework in your app.
There are a lot of great videos on the Apple site and sample code. This will be a brilliant learning curve for you to get up to speed with Core Animation.
Just make a couple of videos for each scene and play them according to the button clicked!

Playing streamed content in an app

I have a number of music tracks, and I would like the user to be able to preview a small clip of each.
These tracks are on a server.
How is media streamed into the app, and which player is used? Can a custom player be created to play the clips within the view, without, e.g., the QuickTime player opening?
Thanks
If you don't want to use QuickTime, the matter is rather complex, as far as I know. Fortunately, a lot of the work has already been done for you by Matt Gallagher. See this excellent post for further information. The code that he provides works perfectly in my application.