I am using TwilioVideo for video calling. My problem is that when I connect to a video room, only voice is working. My code and the issue are shown below.
It is a compilation error: it looks like the addRenderer method does not exist on subscribedVideoTrack. I am not familiar with Swift, but I think you should add the renderer via the publication object.
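If it helps, here is a rough sketch of what I mean, based on the delegate callback in the Swift Quickstart. I'm assuming a recent TwilioVideo Swift SDK (type names and the exact delegate signature differ between SDK versions), and remoteView stands for whatever VideoView you have already set up:

    import TwilioVideo

    // Sketch only: when the local participant subscribes to a remote video track,
    // attach the renderer through the publication's remoteTrack property instead of
    // a subscribedVideoTrack property.
    func didSubscribeToVideoTrack(videoTrack: RemoteVideoTrack,
                                  publication: RemoteVideoTrackPublication,
                                  participant: RemoteParticipant) {
        // remoteView is assumed to be a VideoView created elsewhere in the view controller.
        publication.remoteTrack?.addRenderer(self.remoteView)
    }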
Hi, I am using the Twilio Video SDK to implement a video calling feature in my app. The video call itself works, but the voice is not transmitted between the two users.
I also added the Microphone Usage key (NSMicrophoneUsageDescription) to the Info.plist file, but that did not solve the problem. I tried with and without the microphone and headset, but there is no voice.
I can see that the addedVideoTrack function is being called and the print statements inside it are executed, but addedAudioTrack is not being called at all.
Can somebody provide a solution to this problem or point me in the right direction?
I am using the code from the Quickstart example provided by Twilio.
Here is the link to the tutorial I am referring to.
It was my mistake: I had commented out a line that prepares the audio. I just found it and uncommented it, and now both video and audio are fully functional.
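For anyone who hits the same symptom: the audio-preparation code in the Quickstart looks roughly like the sketch below. This is a hypothetical reconstruction rather than my exact code, and property names differ between TwilioVideo SDK versions, but the point is that if the local audio track is never created and published, the other side's addedAudioTrack callback never fires:

    // Hypothetical reconstruction (names vary by TwilioVideo SDK version): create the
    // local audio track and include it in the connect options so it is published.
    self.localAudioTrack = TVILocalAudioTrack()

    let connectOptions = TVIConnectOptions(token: accessToken) { builder in
        if let audioTrack = self.localAudioTrack {
            builder.audioTracks = [audioTrack]
        }
        if let videoTrack = self.localVideoTrack {
            builder.videoTracks = [videoTrack]
        }
    }
    room = TwilioVideo.connect(with: connectOptions, delegate: self)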
My question is whether there is any built-in interpretation of metadata by the video player in iOS. I know one can add metadata to a video and interpret it within a custom application, as shown here.
On iOS (iPod or iPhone), an HTML video is opened in the native player. I would like to display a message above or below the video for a short duration at the beginning. Since I cannot control the native player, I thought there might be some built-in metadata interpretation that could be used to do this, but I have not been able to find any information on it.
Any help is appreciated.
The blog you've posted includes details on using the native player MPMoviePlayerController to display metadata, which is pretty cool actually. You learn something new every day! If you're making a PhoneGap app, I suppose you could write a plugin to do this?
Alternatively, have a look at this other SO question, which appears to suggest that it is possible, though seemingly not with metadata embedded in the actual video. Apparently this works on iOS.
Reading metadata from the <track> of an HTML5 <video> using Captionator
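For completeness, the MPMoviePlayerController route from the blog looks roughly like the sketch below. This is a sketch only, assuming a native app (or PhoneGap plugin) where you control the player, that the video or stream actually carries timed metadata, and with videoURL as a placeholder; MPMoviePlayerController has since been deprecated in favour of AVPlayer:

    import MediaPlayer

    // Sketch: MPMoviePlayerController posts a notification whenever it encounters
    // timed metadata, which a custom app or plugin could use to show a message.
    let player = MPMoviePlayerController(contentURL: videoURL) // videoURL is a placeholder

    NotificationCenter.default.addObserver(
        forName: .MPMoviePlayerTimedMetadataUpdated,
        object: player,
        queue: .main) { _ in
            for case let item as MPTimedMetadata in player.timedMetadata ?? [] {
                print("Timed metadata:", item.value as Any)
            }
    }
    player.play()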
I created an app that plays a song and calculates the decibel level of the audio being played. That works fine.
But I want to make a change to it: to receive the sound/audio from outside (when the user speaks) and calculate its decibel level.
I don't want to record anything; I just want to receive the audio and calculate the decibels.
Any hints or tutorials please?
You could try using the source code for one of the sample apps (SpeakHere) in the iOS Developer Library as a starting point: http://developer.apple.com/library/ios/#samplecode/SpeakHere/Introduction/Intro.html
I found this source code, which works without any modifications: https://github.com/jkells/sc_listener_sample
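Neither sample is strictly needed just for level metering, though. Below is a minimal sketch of the idea using AVAudioEngine, which is a different (more recent) API than the Audio Queue code in those samples: it taps the microphone input, computes the RMS of each buffer and converts it to decibels, without writing anything to disk. You still need the microphone usage key in Info.plist:

    import AVFoundation

    // Configure the audio session for input (measurement mode avoids added processing).
    let audioSession = AVAudioSession.sharedInstance()
    try? audioSession.setCategory(.record, mode: .measurement, options: [])
    try? audioSession.setActive(true)

    // Tap the microphone input, compute RMS per buffer and convert to dB full scale.
    let engine = AVAudioEngine()
    let input = engine.inputNode
    let format = input.inputFormat(forBus: 0)

    input.installTap(onBus: 0, bufferSize: 1024, format: format) { buffer, _ in
        guard let samples = buffer.floatChannelData?[0] else { return }
        let frameCount = Int(buffer.frameLength)
        var sumOfSquares: Float = 0
        for i in 0..<frameCount {
            sumOfSquares += samples[i] * samples[i]
        }
        let rms = sqrt(sumOfSquares / Float(max(frameCount, 1)))
        let decibels = 20 * log10(max(rms, .leastNonzeroMagnitude))
        print("Level: \(decibels) dBFS")
    }

    do {
        try engine.start()
    } catch {
        print("Could not start audio engine: \(error)")
    }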
I have a question about video stream processing. Is it possible to access and modify the real-time video stream during recording (e.g. I want to add some text to the video)? I can do this in the preview by processing individual frames, but I'm looking for a tool that will let me store the video with my text in the video frames.
There are probably already libraries/tools available for this, but I haven't found any yet.
Try the GPUImage library. It can help you.
You should check the AVCam sample code by Apple. That might be a starting point.
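If it helps to see the underlying mechanism in plain AVFoundation (a rough sketch, independent of GPUImage and AVCam, so don't read it as either library's API): you receive every frame through AVCaptureVideoDataOutput, draw into the pixel buffer, and write the modified frames out yourself with AVAssetWriter. Only the capture/frame-access half is shown; the writer setup and the actual text drawing are left as comments:

    import AVFoundation

    // Sketch: receive raw frames during capture so each pixel buffer can be modified
    // (e.g. text drawn in) before being appended to an AVAssetWriter for saving.
    final class FrameTap: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
        let session = AVCaptureSession()
        private let output = AVCaptureVideoDataOutput()

        func start() throws {
            guard let camera = AVCaptureDevice.default(for: .video) else { return }
            let input = try AVCaptureDeviceInput(device: camera)
            if session.canAddInput(input) { session.addInput(input) }
            output.setSampleBufferDelegate(self, queue: DispatchQueue(label: "frame.tap"))
            if session.canAddOutput(output) { session.addOutput(output) }
            session.startRunning()
        }

        func captureOutput(_ output: AVCaptureOutput,
                           didOutput sampleBuffer: CMSampleBuffer,
                           from connection: AVCaptureConnection) {
            guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
            // Draw text into pixelBuffer here (Core Graphics or Core Image), then append
            // it via an AVAssetWriterInputPixelBufferAdaptor so the overlay ends up in
            // the saved file rather than only in the preview.
            _ = pixelBuffer
        }
    }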
I am trying to get the duration of a video taken with the camera using UIImagePickerController on the iPhone. Has anyone found a solution to this?
Thanks
Daniel
You can now do this using AVFoundation: turn your movie into an AVAsset and then check its duration property.
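For instance, something along these lines, assuming videoURL is the media URL (UIImagePickerControllerMediaURL) that your picker delegate receives:

    import AVFoundation

    // Returns the duration, in seconds, of the movie at the given file URL,
    // e.g. the media URL handed back by UIImagePickerController's delegate.
    func durationOfRecordedVideo(at videoURL: URL) -> Double {
        let asset = AVURLAsset(url: videoURL)
        return CMTimeGetSeconds(asset.duration)
    }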
Oh, why, hello hopelessly obsolete answer. I'm afraid you're only left here as historical evidence that yes, before iOS 4 if you wanted to do anything remotely interesting on a recorded video (besides playing it) you had to implement the processing yourself.
I don't know of any framework function to do this, so I'm afraid you'll have to parse the video container yourself (which, by the way, is QuickTime/.mov) to extract this info. At least the format is documented. Luckily, since the provider is known, you can trust all the info to be truthful, which you can't assume of random videos found on the web.