I want to parse JUST SIP custom headers (or call-attached data), not other header fields.
As far as I can see, some implementations use the "X-" prefix to distinguish those headers from standard headers. But that is just a convention.
In some systems, the attached data (custom headers) is placed just after the "Content-Length:" header, while others put it after the "Contact:" header.
I really cannot find a generic AND elegant way to parse just the custom headers.
The only solution I can think of is to create a look-up table containing all standard SIP header names and, if a header's name is not in that list, treat it as custom. That is ugly...
Any suggestions for a more elegant solution?
It depends on what you want to accomplish, but since servers/clients/proxies can inject any header they want, your only real solution is to have a white-list of valid header names. The major downside is that you have to keep up with any new RFCs that define new "official" headers.
Depending on the use case, you might want to just go for headers starting with X-. As you said, it's just a convention, but it's one that's in wide use, IIRC.
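If it helps, here is a minimal sketch of that combined approach in Java; note the white-list below is abridged (RFC 3261 defines the full set of standard headers), and the class name is made up:

import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;

public class SipCustomHeaderFilter {

    // Abridged white-list of standard SIP header names (lower-cased for comparison).
    private static final Set<String> STANDARD = Set.of(
            "via", "from", "to", "call-id", "cseq", "contact",
            "max-forwards", "content-type", "content-length");

    // Keep a header if it uses the X- convention or is unknown to the white-list.
    public static Map<String, String> customHeaders(Map<String, String> headers) {
        Map<String, String> custom = new LinkedHashMap<>();
        for (Map.Entry<String, String> e : headers.entrySet()) {
            String name = e.getKey().toLowerCase();
            if (name.startsWith("x-") || !STANDARD.contains(name)) {
                custom.put(e.getKey(), e.getValue());
            }
        }
        return custom;
    }
}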
I'm trying to add a new content type to a REST endpoint. Currently it only returns JSON, but I now need to be able to return a CSV file as well.
As far as I know, the best way to do this is by using the Accept header with the value text/csv and then adding a converter that can react to it and render the response body in the proper CSV representation.
I've been able to do this, but then I have a problem handling exceptions. Up until now, all the errors returned have been in JSON: the frontend expects any 500 status code to contain a specific body with the error. But now that my endpoint can return either application/json or text/csv, in case of an error the converter used to transform the body will be either the Jackson converter or my custom one, depending on the Accept header passed. Moreover, my frontend is going to need to read the Content-Type returned and parse the value based on the type of representation returned.
Is this the normal approach to handle this situation?
A faster workaround would be to forget about the Accept header and include a URL parameter indicating the format expected. That way I'd be able to change the content type of the response and the parsing of the data directly in the controller, since the GET request wouldn't include any Accept header and would accept anything. Some parts of the code already do this where the only expected response format is CSV, so I'm going to have a difficult time defending the use of the Accept header unless there is a better way of handling this.
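For reference, a minimal sketch of such a text/csv converter in Spring, assuming the controller returns rows as List<String[]> (a real implementation would map your own DTOs and handle quoting properly):

import java.io.IOException;
import java.io.OutputStreamWriter;
import java.io.Writer;
import java.nio.charset.StandardCharsets;
import java.util.List;
import org.springframework.http.HttpInputMessage;
import org.springframework.http.HttpOutputMessage;
import org.springframework.http.MediaType;
import org.springframework.http.converter.AbstractHttpMessageConverter;

public class CsvMessageConverter extends AbstractHttpMessageConverter<List<String[]>> {

    public CsvMessageConverter() {
        super(new MediaType("text", "csv")); // selected when the client sends Accept: text/csv
    }

    @Override
    protected boolean supports(Class<?> clazz) {
        return List.class.isAssignableFrom(clazz);
    }

    @Override
    protected List<String[]> readInternal(Class<? extends List<String[]>> clazz,
                                          HttpInputMessage input) throws IOException {
        throw new UnsupportedOperationException("CSV request bodies are not supported");
    }

    @Override
    protected void writeInternal(List<String[]> rows, HttpOutputMessage output)
            throws IOException {
        // Naive CSV rendering; fields containing commas or quotes would need escaping.
        Writer writer = new OutputStreamWriter(output.getBody(), StandardCharsets.UTF_8);
        for (String[] row : rows) {
            writer.write(String.join(",", row));
            writer.write("\r\n");
        }
        writer.flush();
    }
}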
my frontend is going to need to read the Content-Type returned and parse the value based on the type of representation returned.
Is this the normal approach to handle this situation?
Yes.
For example, RFC 7807 describes a common format for describing problems, so the server would send an application/problem+json or an application/problem+xml representation of the issue in the response, along with the usual metadata in the headers.
Consumers that understand application/problem+json can parse the data within it and forward a useful description of the problem to the user, the logs, whatever. Consumers that don't understand that representation are limited to acting on the information in the headers.
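For illustration, an error response in that format might look like this (the type URI and wording here are made up):

HTTP/1.1 500 Internal Server Error
Content-Type: application/problem+json

{ "type": "https://example.com/probs/export-failed",
  "title": "Export failed",
  "status": 500,
  "detail": "The CSV export could not be generated." }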
A faster workaround would be to forget about the Accept header and include a URL parameter indicating the format expected.
That's also fine -- more precisely, you can have a different resource responsible for each of the different media types that you support.
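For example (the paths here are hypothetical):

GET /reports/1234.json  -> always returns application/json
GET /reports/1234.csv   -> always returns text/csv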
It may be useful to review section 3.4 of RFC 7231, which describes the semantics of content negotiation.
RFC 4180 talks about the text/csv MIME type, but doesn't go into some of the more obvious variations one would have to support, for example the separator used (OK, OK, if it's CSV then it should obviously be a comma) and whether there is a header row or not.
I'd like to add parameters like separator=, and headerrow=0|1 to the MIME type so that REST APIs that consume CSV files can consume them appropriately.
The alternative would seem to be to add the parameters to the URL but that feels wrong.
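To illustrate, a request advertising such (currently unregistered, hypothetical) parameters would look like:

Accept: text/csv; separator=","; headerrow=1

For what it's worth, RFC 4180 does define an optional header parameter (header=present or header=absent), which would cover the header-row case.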
Starting with Apiary, I'm currently specifying the APIs for our project.
I was able to write the definitions for the API, and defining the parameters works great.
Now I would also like to add the values passed in the HTTP headers to my documentation (like pagination, the version number of the API, ...).
When browsing through the documentation I found that headers can be added within the payload block or the request block, but I want them to be displayed in the documentation.
Is this possible and what's the best way to achieve this?
You can now use the Headers section.
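For example, inside a request or response section (the header name and value here are illustrative):

+ Response 200 (application/json)
    + Headers

            X-Next-Page: 2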
Parameters are actually not required to be present in the URL template.
Thus, what you can do is simply have
+ Parameters
    + id (required, number, `1`) ... Numeric `id` of the Note to perform action with. Has example value.
    + X-My-Header (required, number, `5469`) ... My header does something
and this is going to be rendered in the table you mentioned as well.
You are going to have a warning from the parser, but it should work as expected.
I'm working with a web application that sends some non-standard HTTP headers in its response to a login request. The header in question is:
SSO_STATUS: LoginFailed
I tried to extract it from the HTTP::Response object as $response->header('SSO_STATUS'), but that does not work. It does work for standard headers such as Set-Cookie, Expires, etc.
Is there a way to work with the raw headers?
If you look at the documentation of HTTP::Headers, it states that:
The header field name spelling is normally canonicalized
including the '_' to '-' translation. There are some application where
this is not appropriate. Prefixing field names with ':' allow you to
force a specific spelling. For example if you really want a header field
name to show up as foo_bar instead of "Foo-Bar", you might set it like
this:
$h->header(":foo_bar" => 1);
These field names are returned with the ':' intact for
$h->header_field_names and the $h->scan callback, but the colons do
not show in $h->as_string.
See this thread on Perlmonks.
You need to access the value of the header field as $response->header('SSO-STATUS').
The syntax for setting fields with underscores in names:
$response->header(':SSO_STATUS' => 'foo');
Am I breaking any laws in the REST bible by returning application/octet-stream for my responses? The REST endpoint receives 5 image URLs:
{ "image1": "http://ww.o.com/1.gif",
"image2": "http://www.foo.be/2.gif" }
and it will download these and return them as application/octet-stream.
CLARIFICATION: The client that invokes this REST interface is a mobile app. Every additional network connection made reduces battery life by a few milliamps. I am forced to use REST because it is a company standard; otherwise I would do my own binary protocol.
It is not so good, as the client will not know what to do with such binary data except store those bytes somewhere or pass them on to some other process (if that is all you need to do with your data, then it is fine).
You may take a look at multipart content types. IMO, a multipart message containing several image/gif parts would be a better alternative.
From the sounds of this, this sounds much more like an RPC call. Specifically, "here's a list of URLs, send me back an archive".
That process is not particularly RESTful, as REST is not an RPC based system.
What you need to do is treat the archives as resources, with a way to create them and then serve them up.
For example you could:
POST /archives
Content-Type: application/json
{ "image1": "http://ww.o.com/1.gif",
"image2": "http://www.foo.be/2.gif" }
As a result, you would get
HTTP/1.1 201 Created
Location: http://example.com/archives/1234
Content-Type: application/json
Then, you could make a request to http://example.com:
GET /archives/1234
Accept: multipart/mixed
Here, you will get the actual archive in a single request (like you want), only it's a multipart-formatted result. (multipart/x-zip would work too; that's a zip file.)
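To illustrate, such a multipart response might look roughly like this (the boundary string is arbitrary):

HTTP/1.1 200 OK
Content-Type: multipart/mixed; boundary=archive-1234

--archive-1234
Content-Type: image/gif

(binary data of 1.gif)
--archive-1234
Content-Type: image/gif

(binary data of 2.gif)
--archive-1234--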
If you did:
GET /archives/1234
Accept: application/json
You would get back the JSON you sent originally (so you could, perhaps, edit and update the archive, something you may not want to support sending up the binary images).
To change it, you would simply PUT back the update:
PUT /archives/1234
Content-Type: application/json
{ "image1": "http://ww.o.com/1.gif",
"image2": "http://www.foo.be/2.gif",
"image3": "http://www.foo2.foo/4.gif" }
The resource is /archives/1234, that's its name.
It has two representations in this case: the JSON version, and the actual, binary archive. Your service distinguishes between the two using the content type specified in the Accept header. That header is the client telling you what it wants.
When you're done with the archive, simply DELETE it
DELETE /archives/1234
Or you can have the server expire the resource at some later time.
Why not have five separate REST calls?
It seems cleaner and divides more logically. It will also run the downloads in parallel, two or more at a time depending on the browser you are using.
They are called REST principles, not laws, but no, you are not "breaking" them, IMO. REST is about resources being addressable by a URL and (where appropriate) available in multiple formats. It doesn't say what the format should be. There's a simple description of what REST means in this article.
However, as @Andrey says, there are nicer ways to handle sending multiple data objects than inventing your own ad hoc format. The multipart MIME type/format is one alternative; another is to send the objects packed up in a tar, zip, or similar archive file format.
IMO, the real problem with using application/octet-stream is that it doesn't tell anyone anything about how the data is actually formatted. Rather, your client has to "know" how it is formatted and interpret it accordingly. And the problems with inventing your own format are interoperability and (possibly) having to design, implement, and maintain libraries to support it, possibly many times over.