Stop SIP SUBSCRIBEs from endpoint - sip

I am working with the SIP protocol, and I have an issue with an endpoint (ZTE). The endpoint sends a lot of SUBSCRIBE requests, and I want to stop that because I don't want to implement the method.
I already tried responding with 403 (Forbidden) and 405 (Method Not Allowed), but the endpoint is still sending SUBSCRIBE.
What is the proper way to stop an endpoint from sending that method?
Thanks!

Sending a 405 (Method Not Allowed) response is correct. You should also add an Allow header listing the methods you DO support.
Omit the SUBSCRIBE method from the Allow header in (all) your requests and responses to the endpoint. This indicates that you don't support the SUBSCRIBE method.
Of course, you can't control whether the endpoint complies. If it's poorly implemented, it could still send SUBSCRIBEs despite you indicating not to send them.

A well-behaved UAC should inspect the Allow-Events header in any response and only subscribe to the event packages listed there. If there is no Allow-Events header, or it has an empty value, the assumption should be that the UAS does not support any event package.
Try including an empty Allow-Events header in your responses to the endpoint.
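The advice above can be sketched as a small Python helper that builds such a 405 response. This is a minimal sketch, assuming a plain-text SIP message: the method list is a placeholder, and mandatory headers that must be echoed from the request (Via, From, To, Call-ID) are omitted for brevity.

```python
def build_405(cseq: str) -> str:
    """Build a minimal 405 response that omits SUBSCRIBE from Allow
    and carries an empty Allow-Events (no supported event packages)."""
    lines = [
        "SIP/2.0 405 Method Not Allowed",
        f"CSeq: {cseq}",
        # No SUBSCRIBE/NOTIFY in the supported-methods list:
        "Allow: INVITE, ACK, CANCEL, BYE, OPTIONS",
        # Empty value: we support no event packages at all.
        "Allow-Events: ",
        "Content-Length: 0",
        "",
        "",
    ]
    return "\r\n".join(lines)

print(build_405("1 SUBSCRIBE"))
```

A real stack would also echo the request's Via, From, To and Call-ID headers; the point here is only which headers signal "don't SUBSCRIBE".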

HTTP Post under the hood

We have two Windows services (on the same machine) that communicate over HTTP.
On a specific machine we see the HTTP POST sent from the client (Windows service) arrive at the server (a Windows service listening for REST calls) twice, i.e. I get two identical HTTP POST requests on the service, while on the client we see it was executed only once.
Before going to Wireshark to analyze the HTTP protocol, I wish to understand what could explain this behavior.
When going to https://www.rfc-editor.org/rfc/rfc7231#section-4.3.3
"the origin server SHOULD send a 201 (Created) response containing a Location header
field that provides an identifier for the primary resource created"
I guess we should look in Wireshark for the 201 response? And if there is no response? Could the HTTP or network framework of my C# application be retrying the POST? Because we don't see two requests sent from the client code.
POST reply behavior
While that is true, more often than not the server replies with a 200 OK status code and some extra information.
Whether this is by mistake, to avoid chatty APIs, or due to some other architecture/design consideration, only the developer can tell.
So in theory you get a 201 with an identifier and then make a GET request with that identifier to retrieve details.
In practice this often does not occur, so it is not safe to assume this behavior.
Your problem
I highly doubt that there is a built-in mechanism that retries POSTs. There are plenty of reasons for that:
Duplicating entries. Imagine creating a PayPal payment. If the network has an error and you just did not receive the answer, a built-in retry mechanism would charge you twice.
There are libraries that retry only when you are sure the request is idempotent, i.e. the POST contains some sort of identifier so that the second request will fail (or be deduplicated).
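The idempotency-key technique those libraries rely on can be sketched as follows. The in-memory store and handler name are invented for illustration; a real service would persist the key server-side and send it in a request header.

```python
import uuid

# In-memory stand-in for the server's deduplication store: key -> stored response.
_seen = {}

def handle_post(idempotency_key, payload):
    """Process a POST at most once per idempotency key.

    A retry with the same key replays the stored response instead of
    performing the side effect (e.g. charging a payment) a second time.
    """
    if idempotency_key in _seen:
        return _seen[idempotency_key]          # retry: replay, don't re-charge
    result = {"status": 201, "charged": payload["amount"]}  # do the work once
    _seen[idempotency_key] = result
    return result

# The client picks the key once, before the first attempt, and reuses it on retries.
key = str(uuid.uuid4())
first = handle_post(key, {"amount": 100})
retry = handle_post(key, {"amount": 100})      # simulated network retry
assert first is retry                          # only one charge happened
```

This is why a generic HTTP stack cannot safely retry a POST on its own: without such a key it has no way to know the server didn't already act on the first attempt.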
First, the calls are HTTP GET (not POST).
We define the URL with a hostname/FQDN; the workaround that avoided the duplicated calls was to use the IP address instead of the hostname when sending the REST API call.
Here is the long explanation of the problem; no root cause yet.
We used both Wireshark and Process Monitor to diagnose, but are not sure about the root cause.
Process Monitor: Filtering to display network calls
Wireshark: Filter to show only HTTP
The client sends a single HTTP GET request to:
/DLEManagement/API/Engine/RunLearningPeriod
The call was executed at 11:08:16.931906.
We can see a 2nd call at 11:08:54.511909, which we did not trigger.
The HTTP GET is executed from *Server.exe (in red) and the server is *Management.Webservice.exe (in red).
We see that *Client.exe (an antivirus process, in blue) is sending TCPCopy packets in the window between when we sent and when we received.
Also, we can see that the first request was made over an APIPA IPv6 address and the 2nd call over IPv4; we checked the network interface and it was disabled.
Wireshark screenshot:
Process Monitor screenshot:
Network configuration:

Is my server expected to return a 200 for an HTTP OPTIONS request when the connection point is forbidden to the current user?

Reading various pieces of documentation (such as the W3C CORS spec), I have to say that OPTIONS does not seem that well documented at all.
I'm wondering whether my REST server is doing it right by returning a 403 if the client tries to access a connection point which is forbidden (whether it exists or not does not even always matter, although at times the server may return 404 instead).
The OPTIONS documentation, though, does not seem to imply that such a return code is valid.
I've found this Stack Overflow Q/A where the accepted answer seems to say that we can return any error code.
I would imagine that it would be a security hole to let OPTIONS pass through when the user is not allowed. At the same time, OPTIONS defines the Access-Control-Allow-Credentials header, and I don't see how I could ever make that header useful if I return a 403 on such connection points. (In other words, to me that sounds contradictory.)
Short answer: If you do actually want the OPTIONS response to be useful as far as CORS-enabling the server, then you shouldn’t return 403 just because the user isn’t logged in.
Details:
It’s never invalid for a server to return a 403 for an OPTIONS request. The CORS protocol doesn’t require your server to return a 2xx success response for an OPTIONS request. But if your server doesn’t return a 2xx for OPTIONS requests to a particular URL, then CORS preflight OPTIONS requests to that URL will fail. So if that’s what you actually want to happen, then responding with a 403 is fine — so is responding with a 405 or a 501 or whatever other response code might have a meaning appropriate to your particular case.
But it’s important to keep in mind that the CORS protocol requires browsers to never send authentication credentials as part of the CORS preflight OPTIONS request. So if your server is configured to require authentication in order for OPTIONS requests to produce 2xx success responses, then all CORS preflights coming from browsers are going to fail 100% of the time. In other words, you’d be ensuring that all requests fail that come to the server from frontend JavaScript code running in a browser and that add custom request headers (e.g., Content-Type or Authorization headers) and so trigger preflights.
I’m not aware of any specific security hole that could occur by responding with 2xx to unauthenticated OPTIONS requests (from users who are not allowed). That doesn’t permit users to get any info from your server beyond whatever you intentionally choose to put into the response headers you send in response to the OPTIONS request. And of course it doesn’t prevent you from requiring authentication for GET or POST or whatever other methods. But in terms of the CORS protocol, it’s the only way for your server to indicate what methods and request headers it allows in requests from frontend JavaScript.
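One way to reconcile the two requirements (preflights succeed, the actual requests stay protected) is to exempt OPTIONS from the authentication check. Here is a framework-agnostic sketch; the function name and header values are made up for illustration, and a real server would emit these as HTTP response headers.

```python
# Placeholder CORS metadata the server is willing to advertise to any caller.
CORS_HEADERS = {
    "Access-Control-Allow-Origin": "https://app.example",
    "Access-Control-Allow-Methods": "GET, POST, OPTIONS",
    "Access-Control-Allow-Headers": "Authorization, Content-Type",
}

def respond(method, authenticated):
    """Return (status_code, headers) for an incoming request.

    OPTIONS is the CORS preflight and never carries credentials, so it is
    exempt from the auth check; the actual GET/POST still requires auth.
    """
    if method == "OPTIONS":
        return 204, CORS_HEADERS   # preflight succeeds without credentials
    if not authenticated:
        return 403, {}             # the real request is still protected
    return 200, {}

assert respond("OPTIONS", authenticated=False)[0] == 204
assert respond("GET", authenticated=False)[0] == 403
assert respond("GET", authenticated=True)[0] == 200
```

Nothing sensitive leaks here: the preflight response only repeats metadata the server chose to advertise.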
Note also: the current actively-maintained requirements for CORS are defined in the Fetch spec https://fetch.spec.whatwg.org. The https://w3.org/TR/cors spec is obsolete and no longer maintained and shouldn’t be used for anything (see https://w3.org/2017/08/16-webappsec-minutes.html#item03 and the answer at https://stackoverflow.com/a/45926657/441757).
I don’t know why the https://w3.org/TR/cors spec hasn’t already been clearly marked obsolete, but I’ll try to make sure it gets marked as such soon.

Which takes precedence VIA or Contact header?

As the title says: which should take precedence when replying to a request? I have a provider that sends a Via header that is different from the Contact header. They are stating that I should be sending SIP signalling back to the URI in the Contact header, but Kamailio is sending it back to the URI in the Via header.
I cannot find an RFC which specifies the precedence.
Thanks
The Via and Contact headers serve different purposes:
Via header: it lists all the network nodes (servers, proxy servers, etc.) the request traversed from the originating point to the endpoint.
The Via header is used by the User Agent Server (UAS) to return the SIP status responses (e.g. SIP 100 Trying, 180 Ringing, etc.).
Contact header: the Contact header basically contains the SIP URI of the end user, which the originating user can use to send future requests to. That is, requests that belong to the same dialog, such as re-INVITE, BYE and ACK messages. (The Contact header field has a role similar to the Location header field in HTTP.)
There is no precedence; they serve different purposes.
The response should be sent to the URI in the Via header.
You can use the Contact header URI to compute the Request-URI for new requests within this session.
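The split can be sketched in a few lines of Python. The header values are placeholders, and real Via handling also involves the received/rport parameters, which are skipped here.

```python
# Placeholder headers as they might appear in an incoming request.
request_headers = {
    "Via": ["SIP/2.0/UDP proxy.example.com:5060;branch=z9hG4bK776"],
    "Contact": "<sip:alice@192.0.2.10:5060>",
}

def response_target(headers):
    """Responses travel back via the topmost Via entry (RFC 3261 s18.2.2)."""
    topmost_via = headers["Via"][0]
    # "SIP/2.0/UDP host:port;params" -> take the sent-by part, drop params.
    return topmost_via.split()[1].split(";")[0]

def in_dialog_request_uri(headers):
    """New in-dialog requests (BYE, re-INVITE, ...) target the Contact URI
    (RFC 3261 s12.2)."""
    return headers["Contact"].strip("<>")

print(response_target(request_headers))        # proxy.example.com:5060
print(in_dialog_request_uri(request_headers))  # sip:alice@192.0.2.10:5060
```

So the provider's claim is backwards for responses: Contact only comes into play when you originate a new request inside the dialog.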

Handle REST API timeout in time consuming operations

How is it possible to handle timeouts for time-consuming operations in a REST API? Let's say we have the following scenario as an example:
A client service sends a request to insert a resource through a REST API.
The timeout elapses. The client thinks the insertion failed.
The REST API keeps working and finishes the insertion.
The client is never notified of the resource insertion, so its status remains "Failed".
I can think of a solution with a message broker: send orders to a queue and wait until they are resolved.
Any other workaround?
EDIT 1:
POST-PUT pattern, as has been suggested in this thread.
A message broker (adds more complexity to the system).
Callback or webhook: pass a return URL in the request that the server API can call to let the client know that the work is completed.
HTTP offers a set of properties for invoking certain methods. These are primarily safety, idempotency and cacheability. While the first guarantees a client that no data is modified, the second promises that a request can be reissued in the face of connection issues, when the client does not know whether the initial request succeeded and only the response got lost midway. PUT, for example, provides such an idempotency property.
A simple POST request to "insert" some data does not have any of these properties. A server receiving a POST request furthermore processes the payload according to its own semantics. The client does not know beforehand whether a resource will be created or if the server just ignores the request. In case the server created a resource the server will inform the client via the Location HTTP response header pointing to the actual location the client can retrieve information from.
PUT is usually used only to "update" a resource, though according to the spec it can also be used in order to create a new resource if it does not yet exist. As with POST on a successful resource creation the PUT response should include such a Location HTTP response header to inform the client that a resource was created.
The POST-PUT creation pattern separates the creation of the URI from the actual persistence of the representation by first firing off POST requests to the server until a response is received containing a Location HTTP response header. This header is then used in a PUT request to actually send the payload to the server. As PUT is idempotent, the client can simply reissue the request until it receives a valid response from the server.
On sending the initial POST request to the server, a client can't be sure whether the request reached the server and only the response got lost, or the initial request didn't make it to the server at all. As the request is only used to create a new URI (without any content yet), the client may simply reissue the request and in the worst case just create a new URI that points to nothing. The server may have a cleanup routine that frees unused URIs after a certain amount of time.
Once the client receives the URI, it can simply use PUT to reliably send data to the server. As long as the client hasn't received a valid response, it can just reissue the request over and over until it receives one.
I therefore do not see the need to use a message-oriented middleware (MOM) using brokers and queues in order to guarantee reliable messaging.
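The pattern can be sketched with an in-memory stand-in for the server. All names here are invented for illustration; a real client would speak HTTP and read the URI from the Location response header.

```python
import uuid

# In-memory "server" state: URI -> stored representation (None = URI minted,
# nothing persisted yet).
resources = {}

def server_post():
    """POST only mints a new URI; the payload comes later via PUT."""
    uri = f"/items/{uuid.uuid4()}"
    resources[uri] = None
    return uri                      # a real server returns this in Location

def server_put(uri, payload):
    """Idempotent: storing the same payload twice leaves the same state."""
    resources[uri] = payload
    return 200

def client_create(payload, attempts=3):
    """POST to obtain a URI, then reissue PUT until acknowledged."""
    uri = server_post()             # worst case, retries mint unused URIs
    for _ in range(attempts):
        if server_put(uri, payload) == 200:
            return uri
    raise RuntimeError("server unreachable")

uri = client_create({"name": "example"})
assert resources[uri] == {"name": "example"}
```

Because the PUT can be repeated safely, a lost response costs nothing: the client just sends it again, which is exactly the guarantee a timeout-prone operation needs.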
You could also cache the data after a successful insertion with a previously exchanged request_id or something of that sort. But I believe a message broker with some asynchronous task runner is a much better way to deal with the problem, especially if your request threads are a scarce resource. What I mean by that is: if you are receiving a good amount of requests all the time, then it is a good idea to keep your responses as quick as possible so the workers will be available for any requests to come.

Notify SIP proxy call has ended

The SIP "BYE" message is usually delivered from one SIP user agent to the other directly if the call is finished. How then can a SIP proxy, switch or exchange monitor if a call has ended?
If a proxy is interested in a call - in particular, it wants to know about BYEs - it requests its addition to the SIP route by adding a Record-Route header.
The SIP RFC has some example call flows illustrating the use of the header, but briefly, dialog-establishing requests (like INVITE, REFER, etc.) en route to the callee pass through various proxies. These add themselves to the route set of the dialog. When the callee constructs its response - or sends its own in-dialog request - it adds these servers' URIs to the messages it sends as Route headers. (I'm skipping some hairy details around Route headers and the Request-URI and RFC 2543 compatibility.)
Alternatively, a UA may be configured to use a certain chain of proxies: when it sends its INVITE, it will use Route headers (and the Request-URI) to force the message to travel a particular route.
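The route-set construction described above can be sketched as follows. The URIs are placeholders; per RFC 3261 section 12.1, the UAS takes the Record-Route values in the order received, while the UAC learns them from the response and reverses them.

```python
# Record-Route values as seen by the UAS in the incoming INVITE. Each proxy
# prepends itself, so the topmost entry is the proxy closest to the callee.
record_route = [
    "<sip:proxy2.example.com;lr>",
    "<sip:proxy1.example.com;lr>",
]

# Callee (UAS): route set is the Record-Route list taken in order.
uas_route_set = list(record_route)

# Caller (UAC): route set is the same list taken in reverse order, so its
# first Route hop is the proxy nearest to itself.
uac_route_set = list(reversed(record_route))

# In-dialog requests (BYE, re-INVITE, ...) then carry these as Route headers,
# which is how a Record-Routing proxy gets to see the BYE.
print(uac_route_set[0])  # <sip:proxy1.example.com;lr>
```

Either way, once the proxy is in the route set, the BYE flows through it and it can mark the call as ended.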