Parallel SIP transactions

Is it possible to perform many SIP transactions in parallel, for a UA talking to two other UAs? For example, if UA1 is in the middle of an INVITE, can UA1 respond to an incoming INVITE from UA3? What about standalone transactions?

There's nothing in the standard that prevents a SIP device from handling multiple concurrent transactions, and in fact SIP servers need to do so in order to handle any kind of load.
As to how a SIP user agent should handle concurrent SIP transactions, that's a separate consideration. If UA1 is already on a call and a new INVITE request comes in from UA3, the typical way to handle it is with some kind of call waiting indication. With a softphone that indication can be visual, whereas with an ATA it is often given on the audio channel by injecting tones into the UA's audio stream.
For non-INVITE transactions it will generally be a lot simpler, since most don't require any user action. For example, the UA could maintain half a dozen different registrations with different SIP servers, and the various REGISTER and/or SUBSCRIBE transactions (in this case the transaction is simply the combination of the request and response) could be running concurrently.

There's another SIP parallel-transactions gotcha to watch for too...
Within a SIP dialog, if multiple UAC transactions are started within a short space of time (~0.5 s) and your transport is unreliable (UDP), there is a possible problem if the initial request packet is lost.
The packet with sequence number (CSeq) 'n' is lost and never arrives, but the next packet does, containing CSeq 'n+1'.
This is acceptable at the receiving (UAS) side, and it updates its knowledge of the "remote CSeq" to 'n+1'.
The initial request is then resent, but CSeq 'n' is now lower than the remote CSeq, so it MUST be discarded and the UAS responds with a 500 Server Internal Error.
Probably not what was expected!
So if your transport is unreliable, you need to consider serialising requests within a dialog.
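To make that concrete, here is a minimal sketch (Python; the class, method and callback names are invented for illustration) of a per-dialog gate that holds back new in-dialog requests until the previous UAC transaction has completed, so CSeq values always reach the UAS in order:

```python
from collections import deque

class InDialogSerializer:
    """Hold back in-dialog requests so only one UAC transaction is
    outstanding at a time, avoiding the lost-CSeq-'n' problem over UDP."""

    def __init__(self, send_fn):
        self.send_fn = send_fn     # e.g. lambda raw: sock.sendto(raw, peer_addr)
        self.pending = deque()     # requests waiting for their turn
        self.in_flight = False     # is a transaction currently outstanding?

    def submit(self, request):
        """Called whenever the UA wants to send a new in-dialog request."""
        if self.in_flight:
            self.pending.append(request)   # keep CSeq order intact
        else:
            self.in_flight = True
            self.send_fn(request)

    def on_transaction_complete(self):
        """Called on the final response (or timeout) of the current transaction."""
        if self.pending:
            self.send_fn(self.pending.popleft())
        else:
            self.in_flight = False
```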

Does it make sense to use RTP protocol for multiple streamers and single receiver?

I am in the process of learning and trying to use the RTP/RTCP protocol. My situation is that there are 1 to n streamers and 1 (or potentially 1 to m, if needed) receiver(s), but in such a way that the streamers themselves do not know about each other (they cannot, directly, for technical reasons such as different networks, limited bandwidth, etc...). So it is more like multiple unicast sessions, but the receiver actually knows about them all and collects data from all of them; it is just that the senders do not know about each other.
Now, reading about the protocol, it seems to me that a huge portion of it is related to sending feedback, collision detection, and so on. So I have doubts: is RTP really applicable in this case? Is it already used in this way somewhere?
It seems to me it is still beneficial to collect the statistics about data transfer that RTP provides (data sent, loss, times, etc...); it just feels like most of the protocol is sort of left out...
I also have one additional question. Going through the various RTP libraries, they all assume that the sender will also open ports for receiving RTP/RTCP data. Does RTP forbid one-way communication? I mean an application that would only stream the data, not expecting to receive anything back. The libraries (e.g. ccRTP) seem to assume two-way communication only...
RTCP is the protocol that provides statistics. The stream receiver (client) will send stats to the sender (server) via RTCP. I don't believe the client will get any statistic reports from the server.
There's nothing wrong with a single client receiving multiple unicast sessions from various servers.
RTP requires two-way communication during the setup process. Once setup is complete and the play command is sent, it is mostly one way. The exceptions are the "keep alive" packets that must be sent to the server periodically (usually every 60 seconds or so) to keep the stream going. The exact timeout value is sent to the client during the setup process.
But if you implement your own RTP, there's nothing stopping you from having the server send the stream continuously without any feedback from the client. Basically it would be implementing an infinite timeout value.
You can read about all the details in the spec: RTP: A Transport Protocol for Real-Time Applications
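If you do end up rolling your own one-way RTP sender as described above, the packet format itself is small. A rough sketch in Python (the destination address, PCMU payload type, fixed SSRC and 20 ms timing are illustrative assumptions, not anything mandated by the libraries mentioned):

```python
import socket
import struct
import time

DEST = ("203.0.113.10", 5004)   # placeholder receiver address/port
PT = 0                          # payload type 0 = PCMU, just as an example
SSRC = 0x12345678               # arbitrary fixed SSRC for this sender

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
seq = 0
timestamp = 0

def send_rtp(payload: bytes):
    global seq, timestamp
    # 12-byte RTP header: V=2,P=0,X=0,CC=0 -> 0x80, then M/PT, seq, timestamp, SSRC
    header = struct.pack("!BBHII", 0x80, PT, seq & 0xFFFF,
                         timestamp & 0xFFFFFFFF, SSRC)
    sock.sendto(header + payload, DEST)
    seq += 1
    timestamp += 160            # 20 ms of 8 kHz audio per packet

# stream 20 ms frames forever, never reading anything back
while True:
    send_rtp(b"\xff" * 160)     # 160 bytes of PCMU "silence"
    time.sleep(0.02)
```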

How to prevent sending same data to different clients in REST api GET?

I have 15 worker clients and one master connected through the internet. Jobs and data are passed through a REST API in JSON format.
Jobs are not restricted to any particular client. Any worker can query for available jobs at a regular interval (say 30 seconds), process them and update the status.
In this scenario, how can I prevent the same records from being sent to different clients on a GET request?
The following are my approaches to overcome this issue:
Take the top 5 unprocessed records from the database, mark them as SENT and expose them via REST GET.
But the problem is that this creates inconsistency. Sometimes the client doesn't get the data due to a network connectivity issue, but on the server it is already marked as SENT, so no other client can get that data; it remains SENT forever.
Get the list from the server, and reply back to the server with the list of job IDs that were received. But in this time gap, other clients may also get the same set of jobs.
You've stumbled upon a fundamental problem in distributed systems: there is no way to know if the other side received your message. You can certainly improve the situation with TCP and ack messages. But if you never get the ACK, did the message never arrive, did it arrive but the recipient die before processing, or did the recipient send the ACK and the ACK get dropped?
That means you need to design your system to handle receiving data more than once.
You offer two partial solutions; if you combine them, your solution starts to look like how SQS works. Mark the item as pending_ack with a timestamp. After the client replies, it is marked SENT. Any pending_ack past a certain time period becomes eligible to be resent.
Pick your time period to allow for slow networks and slow clients, and it boils down to only sending duplicates when you really don't know whether the client died or not.
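A minimal in-memory sketch of that pending_ack / timeout idea (the 120-second timeout, field names and endpoint mapping are assumptions; a real server would do this atomically in the database):

```python
import time

VISIBILITY_TIMEOUT = 120  # seconds; tune for slow networks and slow clients

# each job: {"id": ..., "status": "NEW" | "PENDING_ACK" | "SENT", "claimed_at": ...}
jobs = {}   # e.g. jobs[42] = {"id": 42, "status": "NEW"}

def fetch_batch(limit=5):
    """Handler behind GET /jobs: hand out unclaimed jobs and expired claims."""
    now = time.time()
    batch = []
    for job in jobs.values():
        expired = (job["status"] == "PENDING_ACK"
                   and now - job["claimed_at"] > VISIBILITY_TIMEOUT)
        if job["status"] == "NEW" or expired:
            job["status"] = "PENDING_ACK"
            job["claimed_at"] = now
            batch.append(job["id"])
            if len(batch) == limit:
                break
    return batch

def acknowledge(job_ids):
    """Handler behind POST /jobs/ack: the client confirms it received these jobs."""
    for job_id in job_ids:
        job = jobs.get(job_id)
        if job and job["status"] == "PENDING_ACK":
            job["status"] = "SENT"
```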
Maybe you should reconsider the approach to blocking resources. A REST architecture, by definition, is not obliged to keep information about clients. Instead, you may want to consider optimistic concurrency control (http://en.wikipedia.org/wiki/Optimistic_concurrency_control).
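As a sketch of what optimistic concurrency looks like here, a worker can try to claim a job with a conditional update and simply lose the race gracefully if another worker got there first (assuming a hypothetical jobs table with id, status and claimed_by columns):

```python
import sqlite3

def claim_job(conn: sqlite3.Connection, job_id: int, worker: str) -> bool:
    """Atomically claim a job only if nobody else has claimed it yet.
    Returns True if this worker won the race, False otherwise."""
    cur = conn.execute(
        "UPDATE jobs SET status = 'CLAIMED', claimed_by = ? "
        "WHERE id = ? AND status = 'NEW'",
        (worker, job_id),
    )
    conn.commit()
    return cur.rowcount == 1   # 0 rows updated means another worker got there first
```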

How to figure out when SIP call is started

I'm writing a simple SIP proxy application which sits between Asterisk and a SIP client (any softphone). The purpose of the application is to calculate the duration of the call.
Below is example of regular flow:
Client sends INVITE to SIP-Proxy, SIP-Proxy resends INVITE to Asterisk
Asterisk answers with 200 OK, SIP-Proxy resends 200 OK to client.
Client answers with ACK, SIP-Proxy resends ACK to Asterisk
Whenever one of the parties sends BYE, conversation should be finished.
At step 2 I assume that the call has started (e.g. the RTP media flow has started). Then I wait for the BYE message to calculate the duration of the call. However, I noticed that some clients never go to steps 3 and 4. No call-end notification is received from either party after step 2, and the duration of such a call is infinite.
What is the best way to find out the start/stop time of a SIP call without sniffing the RTP flow? Should I wait for step 3 to mark the start of the call? What if the client omits the ACK, or the UDP datagram carrying the ACK is lost in the network?
For now I tend to think there is no reliable way to figure out that a SIP call has started. Maybe I should use the Asterisk channels API instead to track active calls.
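For what it's worth, the bookkeeping itself is small once the signalling reliably reaches the proxy. A rough sketch, assuming a parsed-message object with call_id, is_response, status, method and cseq_method attributes (these names are invented for illustration):

```python
import time

calls = {}   # Call-ID -> {"answered": ..., "ended": ...}

def on_sip_message(msg):
    """Feed every SIP message the proxy relays through here."""
    call = calls.setdefault(msg.call_id, {"answered": None, "ended": None})
    if msg.is_response and msg.status == 200 and msg.cseq_method == "INVITE":
        # step 2: take the 200 OK to the INVITE as the start of the call
        if call["answered"] is None:
            call["answered"] = time.time()
    elif not msg.is_response and msg.method == "BYE":
        call["ended"] = time.time()

def duration(call_id):
    call = calls.get(call_id)
    if call and call["answered"] and call["ended"]:
        return call["ended"] - call["answered"]
    return None   # never answered, or still "open" (the problem described above)
```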
Another option is to generate a re-INVITE to test the existence of the session. You don't have to negotiate a session timer or anything; just using a re-INVITE could help. Reuse the SDP to ensure that no media changes happen. But then your application is moving from being a proxy in the path of the call to being an application server.
Also, the duration can only be approximated to the nearest re-INVITE interval, not necessarily the exact time the call was released.
Your problem seems to be at the SIP level, because your proxy does not add itself into the message path using a Record-Route header. This process is called record routing. If you do so, all subsequent requests in your dialog will also traverse it (ACKs and BYEs included).
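A minimal sketch of what record routing amounts to when relaying a request (the proxy URI shown is a placeholder, and a real proxy would of course also handle Via headers, responses, and so on):

```python
def forward_with_record_route(request: str,
                              proxy_uri: str = "sip:203.0.113.5:5060;lr") -> str:
    """Insert a Record-Route header at the top of the request before relaying it,
    so both UAs send all later in-dialog requests (ACK, BYE, ...) back through us."""
    lines = request.split("\r\n")
    # keep the request line first, then our Record-Route, then the original headers
    return "\r\n".join([lines[0], f"Record-Route: <{proxy_uri}>"] + lines[1:])
```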
You should not reinvent the wheel by writing a SIP proxy. For example, you could use an open-source, flexible, powerful and completely customizable SIP proxy in order to build any possible scenario you could think of: OpenSIPS!

What are the required mechanisms for a reliable layer over UDP?

I've been working on writing my own networking engine for my own game development side projects. This requires the options of having unreliable, reliable, and ordered reliable messages. I have not, however, been able to identify all of the mechanisms necessary for reliable and ordered reliable protocols.
What are the required mechanisms for a reliable layer over UDP? Additional details are appreciated.
So far, I gather that these are requirements:
Acknowledge received messages with a sequence number.
Resend unacknowledged messages after a retransmission time expires.
Track round trip times for each destination in order to calculate an appropriate retransmission time.
Identify and remove duplicate packets.
Handle sequence numbers wrapping around when they overflow.
This has influenced my architecture to have reliable message headers with sequences and timestamps, acknowledge messages that echo a received sequence and timestamp, a system for tracking appropriate retransmission times based on address, and a thread that a) receives messages and queues them for user receipt, b) acknowledges reliable messages, and c) retransmits unacknowledged messages with expired retransmission timers.
NOTE:
Reliable UDP is not the same as TCP. Even ordered reliable UDP is not the same as TCP. I am not secretly looking for TCP. Also, before someone plays semantics games, yes... reliable UDP is an "oxymoron". This is a layer over UDP that provides reliable delivery.
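As a rough sketch of the send side of the list above (sequence numbers, an unacked buffer and a retransmission timer; duplicate detection, RTT tracking and flow control are left out, and the fixed 200 ms timer and 4-byte ACK format are assumptions):

```python
import socket
import struct
import time

RETRANSMIT_AFTER = 0.2   # seconds; a real stack derives this from measured RTT

class ReliableSender:
    """Number each message, keep it until acked, retransmit on timer expiry.
    Assumes the peer echoes back the 4-byte sequence number as an ACK."""

    def __init__(self, dest):
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        self.sock.setblocking(False)
        self.dest = dest
        self.next_seq = 0
        self.unacked = {}        # seq -> (payload, last_send_time)

    def send(self, payload: bytes):
        seq = self.next_seq
        self.next_seq = (self.next_seq + 1) & 0xFFFFFFFF   # wrap-around
        self._transmit(seq, payload)

    def _transmit(self, seq, payload):
        self.sock.sendto(struct.pack("!I", seq) + payload, self.dest)
        self.unacked[seq] = (payload, time.monotonic())

    def poll(self):
        """Call regularly: consume ACKs, then retransmit anything that expired."""
        try:
            while True:
                data, _ = self.sock.recvfrom(2048)
                (acked_seq,) = struct.unpack("!I", data[:4])
                self.unacked.pop(acked_seq, None)
        except BlockingIOError:
            pass                 # no more ACKs waiting right now
        now = time.monotonic()
        for seq, (payload, sent_at) in list(self.unacked.items()):
            if now - sent_at > RETRANSMIT_AFTER:
                self._transmit(seq, payload)
```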
You might like to take a look at the answers to this question: What do you use when you need reliable UDP?
I'd add 'flow control' to your list. You want to be able to control the amount of data you're sending on a particular link depending on the round trip times you're getting, or you'll flood the link and just be throwing datagrams away.
Note that depending on the overall protocol, it might be possible to dispense with retransmission timers. See, for example, the Quake 3 network protocol.
In Q3 reliable packets are simply sent until an ack is seen.
Why are you trying to re-invent TCP? It provides all of the features you originally stated, and has been shown to work well.
EDIT - Since your comments show that you have additional requirements not originally stated, you should consider whether a hybrid model using multiple sockets would be better than trying to fulfill all of those criteria in a single application-layer protocol.
Actually it seems that what you really need is SCTP.
SCTP supports:
message based (rather than byte stream) transmissions
multiple streams over a single socket (association)
ordered or unordered receipt of packets
... message ordering is optional in SCTP; a receiving application may choose to process messages in the order they are received instead of the order they were sent

What is Microsoft Message Queuing (MSMQ)? How does it work?

I need to work with MSMQ (Microsoft Message Queuing). What is it, what is it for, how does it work? How is it different from web services?
With all due respect to @Juan's answer, both are ways of exchanging data between two disconnected processes, i.e. interprocess communication (IPC) channels. Message queues are asynchronous, while web services are synchronous. They use different protocols and back-end services to do this, so they are completely different in implementation, but similar in purpose.
You would want to use message queues when there is a possibility that the other communicating process may not be available, yet you still want to have the message sent at a time of the client's choosing. Delivery will occur when the process on the other end wakes up and receives notification of the message's arrival.
As its name states, it's just a queue manager.
You can Send objects (serialized) to the queue where they will stay until you Receive them.
It's normally used to send messages or objects between applications in a decoupled way
It has nothing to do with web services; they are two different things.
Info on MSMQ:
https://msdn.microsoft.com/en-us/library/ms711472(v=vs.85).aspx
Info on WebServices:
http://msdn.microsoft.com/en-us/library/ms972326.aspx
Transactional Queue Management 101
A transactional queue is a middleware system that asynchronously routes messages of one sort or another between hosts that may or may not be connected at any given time. This means that it must also be capable of persisting the message somewhere. Examples of such systems are MSMQ and IBM MQ.
A transactional queue can also participate in a distributed transaction, and a rollback can trigger the disposal of messages. This means that a message is delivered with at-most-once semantics, and delivery is guaranteed unless the transaction is rolled back. The message won't be delivered if:
Host A posts the message but Host B is not connected
Something (possibly but not necessarily initiated from Host A) rolls back the transaction
B connects after the transaction is rolled back
In this case B will never be aware that the message even existed unless it is informed through some other medium. If the transaction was rolled back, this probably doesn't matter. If B connects and collects the message before the transaction is rolled back, the rollback will also reverse the effects of the message on B.
Note that A can post the message to the queue with the guarantee of at-most-once delivery. If the transaction is committed Host A can assume that the message has been delivered by the reliable transport medium. If the transaction is rolled back, Host A can assume that any effects of the message have been reversed.
Web Services
A web service is a remote procedure call or other service (e.g. RESTful APIs) published by a (typically HTTP) server. It is a synchronous request/response protocol and has no guarantee of delivery built into the protocol. It is up to the client to validate that the service has run correctly. Typically this is done through a reply to the request or a timeout of the call.
In the latter case, web services do not guarantee at-most-once semantics. The server can complete the service and fail to deliver a response (possibly through something outside the server going wrong). The application must be able to deal with this situation.
IIRC, RESTful services should be idempotent (the same state is reached after any number of invocations of the same service), which is a strategy for dealing with this lack of guaranteed notification of success/failure in web service architectures. The idea is that conceptually one writes state rather than invoking a service, so one can write any number of times. This means that a lack of feedback about success can be tolerated by the application, as it can retry the posting until it gets a 'success' message from the server.
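A small sketch of that retry-until-acknowledged pattern against an idempotent PUT endpoint (the URL, timeout and back-off values are placeholders; a real client would also avoid retrying 4xx responses):

```python
import time
import urllib.request

def put_with_retry(url: str, body: bytes, attempts: int = 5) -> int:
    """PUT the same representation until the server confirms success.
    Repeating is safe because PUT is idempotent: re-sending rewrites the
    same state rather than performing the action twice."""
    for attempt in range(attempts):
        try:
            req = urllib.request.Request(url, data=body, method="PUT")
            with urllib.request.urlopen(req, timeout=5) as resp:
                if 200 <= resp.status < 300:
                    return resp.status
        except OSError:
            pass                     # timeout or network error: we just don't know
        time.sleep(2 ** attempt)     # back off, then write the same state again
    raise RuntimeError("no success response after %d attempts" % attempts)
```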
Note that you can use Windows Communication Foundation (WCF) as an abstraction layer above MSMQ. This gives you the feel of working with a service - with only one-way operations.
For more information, see:
http://msdn.microsoft.com/en-us/library/ms789048.aspx
Actually there is no relation between MSMQ and web services.
MSMQ is one option for interprocess communication (you can also use sockets, Windows messaging, or mapped memory).
It is a Windows service that is responsible for keeping messages until someone dequeues them.
You can say it is more reliable than sockets, as messages are stored on a hard disk, but it is slower than other IPC techniques.
You can use MSMQ in .NET with just a few lines of code: declare your MessageQueue object and call its Send and Receive methods.
The message itself can be a normal string or binary data.
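As an illustration only: MSMQ can also be driven from Python through its COM automation interface via pywin32, which mirrors the Send/Receive pattern described above. The queue path is a placeholder and the queue is assumed to already exist:

```python
import win32com.client   # pywin32; MSMQ must be installed on this Windows machine

MQ_SEND_ACCESS, MQ_RECEIVE_ACCESS, MQ_DENY_NONE = 2, 1, 0

qinfo = win32com.client.Dispatch("MSMQ.MSMQQueueInfo")
qinfo.PathName = r".\private$\demo_queue"     # placeholder private queue

# send a message
send_queue = qinfo.Open(MQ_SEND_ACCESS, MQ_DENY_NONE)
msg = win32com.client.Dispatch("MSMQ.MSMQMessage")
msg.Label = "hello"
msg.Body = "any plain string (or serialisable object) can go here"
msg.Send(send_queue)
send_queue.Close()

# receive it back (Receive blocks until a message arrives)
recv_queue = qinfo.Open(MQ_RECEIVE_ACCESS, MQ_DENY_NONE)
received = recv_queue.Receive()
print(received.Label, received.Body)
recv_queue.Close()
```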
As everyone has explained, MSMQ is used as a queue for messages. Messages can be wrappers for actual data, objects or anything else that you can serialize and send across the wire. MSMQ has its own limitations: MSMQ 1.0 and MSMQ 2.0 had a 4 MB message limit, a restriction that was lifted with MSMQ 3.0. Message-oriented middleware (MOM) is a concept that heavily depends on messaging, and the Enterprise Service Bus foundation is built on messaging. All these newer technologies depend on messaging for asynchronous data delivery with reliability.
MSMQ stands for Microsoft Message Queuing.
It is simply a queue that stores messages in a format that can be passed along (to a database or another application, on the same machine or on a server). There are different types of queues, which categorize the messages among themselves.
If there is some problem or error inside a message, or an invalid message is passed, it automatically goes to the dead-letter queue, which denotes that it is not to be processed further. Before a message is moved there, delivery is retried up to a maximum count; only then is it sent to the dead-letter queue.
It is generally used, for example, for sending log messages from a client machine to a server or database, so that if any issue happens on the client machine the developer or support team can go through the logs to diagnose the problem.
MSMQ itself is a service provided by Microsoft, and collecting log records in this way is a common use of it.
You can get a better idea from the documentation: http://msdn.microsoft.com/en-us/library/ms711472(v=vs.85).aspx.