Catch audio stream in FreeSWITCH - SIP

Given a SIP call between two people using FreeSWITCH as my telephony engine, how can I catch the audio stream of each person separately and process it before it's sent to the other end? Thanks for your help in advance.

The only possible way I can think of is to set up two conferences: originate a call to A and connect it to Conference A on answer, then call B and connect it to Conference B.
Now if A speaks, you can record the audio, convert it to text, translate it, convert it back to audio, and play it into Conference B, and vice versa.
ESL is a powerful FreeSWITCH module through which you can receive all of FreeSWITCH's events and act on them. In a conference you get events when a member speaks, joins, leaves, is muted, and so on. It's just an idea; I haven't tried it.
It's something like http://www.iamili.com/ that you're going to try :)
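To act on those conference events you would subscribe over the event socket and inspect each event's headers. In "plain" format an event arrives as URL-encoded `key: value` lines; a minimal parser sketch (the sample event below is simplified, and the header values are illustrative):

```python
from urllib.parse import unquote

def parse_esl_event(raw):
    """Parse a plain-format ESL event (key: value lines) into a dict.

    FreeSWITCH URL-encodes header values in 'plain' format, so decode them.
    """
    headers = {}
    for line in raw.strip().splitlines():
        if ": " in line:
            key, value = line.split(": ", 1)
            headers[key] = unquote(value)
    return headers

# Example: a (simplified) conference event as delivered over the event socket.
raw_event = (
    "Event-Name: CUSTOM\n"
    "Event-Subclass: conference%3A%3Amaintenance\n"
    "Action: start-talking\n"
    "Conference-Name: ConfA\n"
    "Member-ID: 3\n"
)
event = parse_esl_event(raw_event)
print(event["Action"])  # start-talking
```

A real listener would connect with the ESL library, subscribe to `CUSTOM conference::maintenance`, and dispatch on the `Action` header; the parsing step looks like the above either way.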

Related

XMPP for multiplayer feature - design question

There is a game that currently is played on standalone computers.
I want to create an add-on that allows the players to interconnect. For that I think XMPP seems to be a suitable platform.
The messages to be exchanged are presence/roster so users can find each other, structured messages to send items or money, and generic text messages for comments and fun. In later versions I'd like to experiment with some "business logic" to send out global changes for the world, or missions and such.
My question is how users get hooked up to each other. Imagine someone creates an XMPP account. How does he start meeting the others?
Or, in general how would the users see each other if they have independent accounts? Should they all join one first multi-user-chat? Should there be one monitoring component to send invites and update rosters?
If, inside the game players can enter different areas, would it make sense to have one multi-user-chat per game area?
I know these are many questions but maybe from them you get the design problem I am facing, and I'd be happy to get some clues how this could get implemented.
Meanwhile I found the answer.
The game acts as an XMPP client. It will automatically connect to a multi-user chat that is hardcoded into the game. Given the correct parameters, the XMPP server creates the chat room when the first user connects; subsequent users simply join the same room.
With this in place, every user automatically receives presence messages for all users in that room. From these the client learns the other players' addresses and can send messages to specific players. Messages addressed to the room are automatically relayed to all other occupants.
So the problem I described above is actually very easy to solve within XMPP.
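The address discovery described above comes down to parsing MUC occupant JIDs, which have the form room@service/nick. A small sketch (the room and nick below are made up):

```python
def split_occupant_jid(occupant_jid):
    """Split a MUC occupant JID (room@service/nick) into (room JID, nick)."""
    room, _, nick = occupant_jid.partition("/")
    return room, nick

# A presence stanza from the hardcoded room would carry a "from" JID like this:
room, nick = split_occupant_jid("gameroom@conference.example.com/player1")
print(room, nick)  # gameroom@conference.example.com player1
```

Messages to the whole room go to the bare room JID; messages to one player go to the full occupant JID.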

Change stream name at runtime on Wowza

I have a question about sending streams to a TV station using Wowza.
I need to send multiple streams, running at the same time, to a TV station over a single link.
Basically the question is: I have multiple streams with different names, and when I send them to the TV station they need to be converted to one unique name at run-time.
Is this possible? If yes, please explain a bit more.
Thanks in advance.
If by sending to "TV" you mean leveraging Push Publish to send to an external CDN or Wowza server, then you can specify the outbound stream name within the Push Publish mapping by setting the "streamName" parameter. You could also remap the inbound published stream name via the approach found here.
Otherwise, if you are referring to requests made for a particular stream on your given Wowza instance (as opposed to pushing outbound), then you could leverage the Stream Name Alias module, with which you can map any stream name to another.
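For the Push Publish case, the outbound name is set per entry in the map file. A hedged sketch of what one entry might look like (the host, application, and stream names are placeholders, and the exact key set depends on your Wowza version, so check your own documentation):

```
# conf/[application]/PushPublishMap.txt (approximate shape)
inboundStream={"entryName":"tvFeed", "profile":"rtmp", "host":"tv.example.com", "application":"live", "streamName":"uniqueOutboundName"}
```

Here "inboundStream" is the locally published name and "streamName" is what the TV station's end would see.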
Thanks,
Matt

What application layer protocol does Google Goggles and Layar use?

These applications stream video from the client app to their own server. I am interested in knowing what type of protocol they use. I am planning on building a similar application, but I don't know how to go about the video streaming. Once I get the stream to my server I will use OpenCV to do some processing and return the result to the client.
I recommend sending only a minimum of data and doing as much of the processing as possible on the client, since sending the whole video stream is a huge waste of traffic (and, I think, cannot be done in real time).
I would use a TCP connection to send an intermediate result to the server, which the server can process further. The design of that communication depends on what you are sending and what you want to do with it.
You can wrap it in XML, for instance, or serialize an object, and so on.
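As a minimal sketch of such a protocol (the message content is hypothetical, and JSON is used instead of XML purely for brevity), each intermediate result can be serialized and framed with a length prefix so the receiver knows where one message ends:

```python
import json
import struct

def frame(obj):
    """Serialize obj to JSON and prefix it with a 4-byte big-endian length."""
    payload = json.dumps(obj).encode("utf-8")
    return struct.pack(">I", len(payload)) + payload

def unframe(data):
    """Parse one length-prefixed message; return (obj, remaining bytes)."""
    (length,) = struct.unpack(">I", data[:4])
    payload = data[4:4 + length]
    return json.loads(payload.decode("utf-8")), data[4 + length:]

# Hypothetical intermediate result: feature points extracted on the client.
msg = {"type": "keypoints", "frame": 42, "points": [[12, 34], [56, 78]]}
wire = frame(msg)          # bytes ready for socket.sendall()
decoded, rest = unframe(wire)
```

The framing matters because TCP is a byte stream with no message boundaries of its own.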

How to connect to NAO robot using sockets?

I'm playing with Aldebaran's NAO humanoid robot simulator and Choregraphe.
I have software in Java that I would like to use to control the robot by activating its behaviors, and I believe sockets would do the trick.
My question is: is there a way to open a socket connection from within Choregraphe+NAOsim, so I can get sensor readings and send commands to the robot?
Or is there any other way to connect to Choregraphe+NAOsim to achieve the same effect?
thanks in advance!
K
I'm planning to use a Python websocket package to accomplish this. As far as I can see, the server can be written in anything. The client part (NAO) should initiate a connection to the server, send something, possibly wait for a reply, and then carry on. The sending functionality can be implemented in Python and coded into one of the NAO action boxes. You could even create a separate box that takes a request as a parameter and outputs the reply from the server: a small, neat box that talks to the server.
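A sketch of that request/reply box, using plain TCP here instead of websockets so the example is self-contained (the request string and the "ok:" reply convention are made up; a stand-in server is spawned locally in place of the real one):

```python
import socket
import threading

def run_reply_server(host="127.0.0.1", port=0):
    """Tiny stand-in for the real server: reads one line, replies 'ok:<line>'."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))  # port 0 = let the OS pick a free port
    srv.listen(1)

    def handle():
        conn, _ = srv.accept()
        with conn:
            request = conn.makefile("r").readline().strip()
            conn.sendall(("ok:" + request + "\n").encode())
        srv.close()

    threading.Thread(target=handle, daemon=True).start()
    return srv.getsockname()  # (host, actual port)

def send_request(addr, request):
    """What the NAO box would do: connect, send a request, return the reply."""
    with socket.create_connection(addr) as sock:
        sock.sendall((request + "\n").encode())
        return sock.makefile("r").readline().strip()

addr = run_reply_server()
reply = send_request(addr, "get_sensor:head_touch")
print(reply)  # ok:get_sensor:head_touch
```

In a Choregraphe box the `send_request` part would live in the box's onInput handler, with the reply forwarded to the box's output.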

On-demand video streaming

I'm currently researching different streaming methods both for live and on-demand streaming.
I've read about both multicast and unicast, and now I got the following question, which I can not find an answer to.
"Is it possible to make on-demand streaming with multicast?"
The way I understand it is that, when using multicast, the media server creates one stream of the video, which is played only once, and which users can connect to and watch.
Is it because multicast only allows live streaming? If not, can someone please explain to me how it works?
"Is it possible to make on-demand streaming with multicast?"
Technically, yes. Practically, no.
The way I understand it is, that when using multicast, the media server creates a stream of the video, which only is played once, which users can connect to and watch.
You understand it correctly. And that is that.
Well, you can do it, but the bigger question is why you would want to.
On-demand suggests that you start the broadcast at the time that a single viewer wants to see that particular piece of content. If a single user chooses the content and the time it is started, why would you want to multicast it?
Yes, it can be done, but there are caveats. If you take a flight on an old plane you may see an old entertainment system that offers say 20 channels with a movie on each. The channels are all rolling and once the programmes have finished they restart. This is better than having just one channel broadcast on a projector as it gives the user choice of what to watch but doesn't give them the freedom of when to watch.
Modern flight entertainment systems are all on-demand: every passenger can watch any film at any time. So how can multicast help there? If you detect that multiple users are watching the same film (the caveat being: at the same time), you can replace the streams to each user with a single multicast channel. That is technically savvy, but you have to ask why you would do it. It only makes sense if the communication medium is unreliable or insufficient to serve every user simultaneously.
Designing a flight entertainment system that does not scale to every passenger actually using it is a bit short-sighted. So the system must handle the worst case of one stream per user, meaning there is no benefit to multicasting anything.
Some cable/satellite networks implement multicast streaming and use time windows to group as many viewers together as possible, for example making a viewer wait up to 5 minutes for the video to start while displaying the infamous word "buffering".
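The time-window grouping described above amounts to batching requests: every request arriving within the window of a batch's first request joins that batch's single multicast stream, and a later request opens a new batch. A small sketch using the 5-minute window from the example:

```python
def batch_into_windows(request_times, window=300):
    """Group request timestamps (seconds) into shared multicast start batches.

    A request within `window` seconds of a batch's first request joins that
    batch; otherwise it starts a new batch (i.e. a new multicast stream).
    """
    batches = []
    for t in sorted(request_times):
        if batches and t - batches[-1][0] <= window:
            batches[-1].append(t)
        else:
            batches.append([t])
    return batches

# Requests at 0s, 120s, and 280s share one stream; 400s starts a new one.
print(batch_into_windows([0, 120, 280, 400]))  # [[0, 120, 280], [400]]
```

The viewer who requested at 0s starts immediately; the one at 280s stares at "buffering" for up to the remainder of the window, which is exactly the trade-off the answer describes.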