Can anybody provide some tips on how to use Jingle over XMPP, specifically with aSmack, as I am developing for Android? I looked at the aSmack source code and there is a library for JSTUN, so it seems to be implemented.
Any code/tip/link would be appreciated.
Smack's Jingle code is pretty old and lacks a current maintainer. I can't say whether it's working right now or how far it has diverged from the specification.
There is "Java-Bells: A Jingle implementation for Java based on LibJitsi, Ice4J and Smack", but I can't comment on Java-Bells fitness for Android.
I know that there are many people out there looking for an out-of-the-box working solution for Jingle on Android. But AFAIK, at the time of writing, there is none.
Here is an implementation of WebRTC + Jingle which supports Android and iOS as well:
WebRTC audio + Jingle protocol brought to iOS and Android
Related
I have created a browser-based chat client which uses the Strophe JavaScript library to connect to an Openfire server.
Now I need to support VoIP and video streaming features in that application.
I have checked the Strophe website for plugins for the above features; they provide a Jingle plugin/extension, but it doesn't have any documentation or examples.
I have tried to build the library myself as per the specification provided by XMPP, but it is taking a lot of time.
So if someone has any documentation or a working example, it would help me develop the feature.
Any other extension built on top of Strophe that provides Jingle support would also be helpful.
Thanks in advance for any suggestions or direction.
Regards,
Kamlesh
I'm in a similar situation. I've searched the internet and found strophe.jingle. I haven't tested it yet, but it seems simple and nice. It uses the WebRTC protocol for video and audio support. https://github.com/ESTOS/strophe.jingle
Are there any APIs to establish video calling between iPhones from my own app? I know of projects such as iDoubs, but I am searching for other examples, or just examples of capturing the stream from the camera in real time.
There is no supported API in the SDK for video calling. You will either need to use a third-party library or write your own (there are some standard protocols for this sort of thing you could use as a reference).
Restcomm is another popular open source alternative.
https://github.com/RestComm/restcomm-ios-sdk
Restcomm also includes an SDK for Android, server-side call control, and many other RTC modules.
Can anyone please tell me how to do video chat on the iPhone?
I have searched many websites, but in vain.
I found this link:
http://code.google.com/p/xmppframework/wiki/iPhone
I am not sure whether this works for video chat too.
Thanks,
Naveed
I don't have experience with XMPP, but I think you would have to add your own video solution on top of it. This is definitely not a straightforward task to accomplish, but these might be useful for you:
1) http://code.google.com/p/idoubs/ - open-source 3GPP IMS client for iOS
2) http://labs.adobe.com/technologies/cirrus/ - RTMFP protocol - works on iOS when compiled with Adobe AIR 2.6
Best,
-Gabriel
I've started looking into the subject of acoustic fingerprinting (http://en.wikipedia.org/wiki/Acoustic_fingerprint) for a pet project of mine on iOS, and I was wondering:
Are there any open-source libraries or source code for iOS that handle this?
Assuming I'm a veteran jack-of-all-trades coder, is it very problematic to implement this myself if there are no open-source versions?
Is the Accelerate DSP library in iOS able to handle such a task?
Thanks
You may want to check out the EchoPrint CodeGen library by The Echo Nest. They even have a fully functional iOS code example.
You can find some additional links to open-source audio fingerprinting software in this MusicBrainz article, but AFAIK the EchoPrint library is the only one with a license that is compatible with iOS apps.
Good Luck!
Not to my knowledge.
No problem for a veteran; it won't be easy, but it's achievable.
I've never looked into it.
Even though it's in Java, this might be interesting reading.
Before doing anything, especially if you intend to sell on the App Store, be aware that these techniques/algorithms are patented. Read what happened to the writer of the blog post above.
Is the Accelerate DSP library in iOS able to handle such a task?
No.
I also noticed that you used the tag "voice recognition". Just to be clear: voice recognition has nothing to do with audio identification/acoustic fingerprinting!
I have a video stream that I used in an iPhone application. I'm now working on porting the application to Android, so I want to use the same stream.
As Apple requires, I created an HTTP Live Streaming setup (media segmenter, m3u8 file, etc.). You can find the stream here: http://envue.insa-lyon.fr/smartphone/aloun_stream/prog_index.m3u8
I want to use this same stream on Android. Has anyone had a similar experience?
Honeycomb/Android 3.0 has limited support for HLS. Anything before that does not have built-in support; there are supposed to be third-party SDKs that will do it, but searching shows a lot of people who can't get hold of the third-party developers.
Check the Android dev docs to find out what is not supported.
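On Honeycomb and later, where the framework player understands HLS, pointing a plain VideoView at the m3u8 is usually enough to test the stream. Below is a minimal sketch, assuming Android 3.0+ and using the stream URL from the question; HlsPlayerActivity is just an illustrative name, and behaviour still varies quite a bit between devices and OS versions.

```java
import android.app.Activity;
import android.media.MediaPlayer;
import android.net.Uri;
import android.os.Bundle;
import android.widget.MediaController;
import android.widget.VideoView;

public class HlsPlayerActivity extends Activity {

    // m3u8 playlist from the question; any HLS URL should behave the same way.
    private static final String STREAM_URL =
            "http://envue.insa-lyon.fr/smartphone/aloun_stream/prog_index.m3u8";

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        // Build the view in code so the sketch stays self-contained (no layout XML).
        final VideoView videoView = new VideoView(this);
        setContentView(videoView);

        // Simple on-screen playback controls.
        MediaController controller = new MediaController(this);
        controller.setAnchorView(videoView);
        videoView.setMediaController(controller);

        // Hand the playlist to the framework player and start once it is prepared.
        videoView.setVideoURI(Uri.parse(STREAM_URL));
        videoView.setOnPreparedListener(new MediaPlayer.OnPreparedListener() {
            @Override
            public void onPrepared(MediaPlayer mp) {
                videoView.start();
            }
        });
    }
}
```

If the device's framework can't handle HLS, VideoView typically just shows a generic "Can't play this video" error, so testing on the exact OS versions you target is the only reliable check.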
I've given up on the m3u8 stream. I just used MP4s with Android's streaming capabilities.
You have to use a websocket to continuously get the TS files, as Apple defines, and send them to a player to decode the H.264+AAC within the TS packets.
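To illustrate just the "continuously get TS files" part (leaving out the transport choice and the actual decoding, which is the hard bit), here is a rough sketch that fetches the playlist from the question over plain HTTP and resolves the individual segment URLs. PlaylistFetcher is a made-up name, and a real live stream would need the playlist re-fetched periodically as new segments appear.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;
import java.util.ArrayList;
import java.util.List;

public class PlaylistFetcher {

    // m3u8 playlist from the question.
    private static final String PLAYLIST_URL =
            "http://envue.insa-lyon.fr/smartphone/aloun_stream/prog_index.m3u8";

    public static void main(String[] args) throws Exception {
        URL playlist = new URL(PLAYLIST_URL);
        List<URL> segments = new ArrayList<URL>();

        BufferedReader reader = new BufferedReader(
                new InputStreamReader(playlist.openStream(), "UTF-8"));
        try {
            String line;
            while ((line = reader.readLine()) != null) {
                line = line.trim();
                // Lines starting with '#' are tags/comments; everything else is a
                // segment URI, possibly relative to the playlist location.
                if (!line.isEmpty() && !line.startsWith("#")) {
                    segments.add(new URL(playlist, line));
                }
            }
        } finally {
            reader.close();
        }

        // Each of these .ts URLs would then be downloaded and handed to a
        // TS demuxer / H.264+AAC decoder.
        for (URL segment : segments) {
            System.out.println(segment);
        }
    }
}
```

The decoding side is where most of the effort goes; this only covers the fetching step, which the built-in player on 3.0+/4.0 already handles for you.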
Check Android 4.0 - it claims to support HTTP Live Streaming 3.0 fully, including HTTPS. For older versions I've seen some people recommending it, but I haven't tried it myself.