After running some performance tests with Gatling, I noticed higher bandwidth usage than expected on the server side.
I thought Gatling requested compressed responses (gzip, deflate) by default on every request, but now I am not sure about that.
When I activate logging (to see which request headers are sent), the "Accept-Encoding" header is not displayed.
Why would that be?
No, it's not enabled by default; that's something you have to specify, for example as a default header on the HTTP protocol.
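A minimal sketch of what that could look like with the Gatling Scala DSL (assuming Gatling 3.x; the base URL, simulation and scenario names are just placeholders):

import io.gatling.core.Predef._
import io.gatling.http.Predef._

class CompressedSimulation extends Simulation {

  // Declare Accept-Encoding once on the HTTP protocol so it is sent with every request.
  val httpProtocol = http
    .baseUrl("https://example.com") // placeholder target
    .acceptEncodingHeader("gzip, deflate")

  val scn = scenario("fetch with compression")
    .exec(http("home").get("/"))

  setUp(scn.inject(atOnceUsers(1))).protocols(httpProtocol)
}

With this in place, the Accept-Encoding header should show up in the request logging.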
I'm working on a Delphi REST client for a public API that requires an HMAC256/Base64 signed string to be added to the headers of the request to authenticate. I've spent hours trying to figure out why it's not working, so I compared the raw request from my Delphi client to that of a working C# library (using Wireshark).
It turns out my request matches the request generated by the working C# library perfectly, except that Delphi's REST client is URL-encoding the values added to the request's headers, thereby invalidating the carefully crafted signature.
This is how I'm adding the signature string to the header:
RESTRequest1.Params.AddHeader('SIGNATURE', FSignature);
The signature string may have slashes, plus signs, and/or equal signs that are being URL-encoded when they shouldn't. For example when the value of the signature string is...
FSignature = '8A1BgACL9kB6P/kXuPdm99s05whfkrOUnEziEtU+0OY=';
...then the request should output raw headers like...
GET /resource HTTP/1.1
User-Agent: Embarcadero URI Client/1.0
Connection: Keep-Alive
<snip>
SIGNATURE: 8A1BgACL9kB6P/kXuPdm99s05whfkrOUnEziEtU+0OY=
<snip>
...but instead Wireshark shows this as the real value being sent...
SIGNATURE: 8A1BgACL9kB6P%2FkXuPdm99s05whfkrOUnEziEtU%2B0OY%3D
Is there a way to prevent the URL-encoding of values when using AddHeader? Or maybe another way to add raw headers to a TRESTClient request?
PS: I already tried both TRESTRequest.Params.AddHeader and TRESTClient.AddParameter with TRESTRequestParameterKind.pkHTTPHEADER as the Kind parameter. Both resulted in URL-encoded values.
PS2: Using Delphi RAD Studio 10.3.
You should include poDoNotEncode in the Options property of the TRESTRequestParameter.
This can be done using:
RESTClient1.AddParameter('SIGNATURE', FSignature, pkHTTPHEADER, [poDoNotEncode]);
or by using:
RESTClient1.Params.AddHeader('SIGNATURE', FSignature).Options := [poDoNotEncode];
Is there a way in akka-http to know whether the 'Content-Type' header was explicitly set in an HttpResponse that we received?
From a sniffed HTTP dump I can see that there was no 'Content-Type' header, but
httpResponse.header[`Content-Type`].get.contentType.mediaType.toString()
and
httpResponse.entity.getContentType().mediaType.toString
still return application/octet-stream.
This is the default content type not only for Akka HTTP, but perhaps for other frameworks like Play too. Akka HTTP and other HTTP-based technologies need to know how to parse content internally, based on this header. application/octet-stream means that the body is treated as a plain byte stream.
Rule of thumb: if at all possible, specify the Content-Type explicitly.
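As a minimal sketch of that rule of thumb (assuming Akka HTTP 10.x), set the Content-Type explicitly when building the response entity instead of relying on the application/octet-stream fallback:

import akka.http.scaladsl.model._

// The content type is carried by the entity, not by the headers collection.
val response = HttpResponse(
  status = StatusCodes.OK,
  entity = HttpEntity(ContentTypes.`application/json`, """{"status":"ok"}""")
)

// response.entity.contentType now reports application/json
// rather than the application/octet-stream default.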
We have a REST API to fetch binary files from the server.
The requests look like
GET /documents/e62dd3f6-18b0-4661-92c6-51c7258f9550 HTTP/1.1
Accept: application/octet-stream
For every response indicating an error, we'd like to give a reason in JSON.
The problem is that such an error response does not have the same content type as the client requested.
But what kind of response should the server produce?
Currently, it responds with a
HTTP/1.1 406 Not Acceptable
Content-Type: application/json
{
"reason": "blabla"
...
}
This seems wrong to me, as the underlying issue is that the resource does not exist, not that the client requested the wrong content type.
But the question is, what would be the right way to deal with such situations?
Is it OK to respond with 404 + application/json although application/octet-stream was requested?
Is it OK to respond with 406 + application/json, as the client did not specify application/json as an acceptable type?
Should the spec be extended so that the client uses the q parameter - for example, Accept: application/octet-stream, application/json;q=0.1?
Other options?
If no representation can be found for the requested resource (because it doesn't exist or because the server wishes to "hide" its existence), the server should return 404.
If the client requests a particular representation in the Accept header and the server is not able to provide such a representation, the server could either:
Return 406 along with a list of the available representations. (see note** below)
Simply ignore the Accept header and return a default representation of the resource.
See the following quote from RFC 7231, the document that defines the content and semantics of the HTTP/1.1 protocol:
A request without any Accept header field implies that the user agent will accept any media type in response. If the header field is present in a request and none of the available representations for the response have a media type that is listed as acceptable, the origin server can either honor the header field by sending a 406 (Not Acceptable) response or disregard the header field by treating the response as if it is not subject to content negotiation.
Mozilla also recommends the following regarding 406:
In practice, this error is very rarely used. Instead of responding using this error code, which would be cryptic for the end user and difficult to fix, servers ignore the relevant header and serve an actual page to the user. It is assumed that even if the user won't be completely happy, they will prefer this to an error code.
** Regarding the list of available representations, see this answer.
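Coming back to the question's scenario, a 404 with a JSON error body could then look like this on the wire (the reason text is of course just illustrative):

GET /documents/e62dd3f6-18b0-4661-92c6-51c7258f9550 HTTP/1.1
Accept: application/octet-stream

HTTP/1.1 404 Not Found
Content-Type: application/json

{
  "reason": "no document exists with id e62dd3f6-18b0-4661-92c6-51c7258f9550"
}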
I am currently working on an application that is supposed to get a web page and extract information from its content.
As I learned from my research (or as it seems to me at least), there is no ideal way to determine the end of an HTTP message.
Generally, I found two different ways to do so:
Set the O_NONBLOCK flag on the socket and fetch data with recv() in a while loop. Assume that the message is complete and break as soon as a read returns no bytes.
Rely on the HTTP Content-Length header and determine the end of the message with it.
Neither way seems completely safe to me. Solution (1) could break the recv loop before the message is complete. On the other hand, solution (2) requires the Content-Length header to be set correctly.
What's the best way to proceed in this case? Can I always rely on the Content-Length header to be set?
Let me start here:
Can I always rely on the Content-Length header to be set?
No, you can't. Content-Length is an optional header. However, HTTP messages absolutely must feature a way to determine their body length if they are to be RFC-compliant (cf. RFC 7230, sec. 3.3.3). That being said, get ready to parse chunked encoding whenever a content length isn't specified.
As for your original problem: ensuring the completeness of a message is actually something that should be TCP's job. But since there are complicated things like message pipelining around, it is best to check for two things in practice (a sketch of the second check follows below):
Have all reads from the network buffer been successful?
Is the number of the received bytes identical to the predicted message length?
Oh, and as #MartinJames noted, non-blocking probably isn't the best idea here.
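A minimal sketch of the second check, written in Scala for brevity (it assumes the Content-Length value has already been parsed from the response headers):

import java.io.{EOFException, InputStream}

// Read exactly `contentLength` body bytes, failing if the peer closes the connection early.
def readBody(in: InputStream, contentLength: Int): Array[Byte] = {
  val buf = new Array[Byte](contentLength)
  var off = 0
  while (off < contentLength) {
    val n = in.read(buf, off, contentLength - off)
    if (n < 0)
      throw new EOFException(s"connection closed after $off of $contentLength bytes")
    off += n
  }
  buf
}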
The end of an HTTP response is defined:
By the final (empty) chunk in case Transfer-Encoding chunked is used.
By reaching the given length if a Content-Length header is given and no chunked transfer encoding is used.
By the end of the TCP connection if neither chunked transfer encoding is used nor a Content-Length is given.
In the first two cases you have a well-defined end, so you can verify that the data was fully received. Only in the last case (end of TCP connection) do you not know whether the connection was closed before all the data was sent. But usually you get either case 1 or case 2.
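For illustration, this is what case 1 looks like on the wire: chunk sizes are given in hexadecimal, and the terminating "0" chunk followed by an empty line marks the end of the body.

HTTP/1.1 200 OK
Transfer-Encoding: chunked
Content-Type: text/plain

4
Wiki
5
pedia
0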
To make your life easier, you might want to provide
Connection: close
header when making the HTTP request - then the web server will close the connection after sending you the full page requested, and you will not have to deal with chunks.
This is only a viable option if you are interested in just this single page and will not request additional resources (script files, images, etc.) - in the latter case it would be a very inefficient approach for both your app and the server.
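For completeness, a request using this approach might look like the following (host and path are just placeholders):

GET /index.html HTTP/1.1
Host: example.com
Connection: close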
I am designing a REST API and am running into a design issue. I have alerts that I'd like the user to be able to export to one of a handful of file formats. So we're already getting into actions/commands with export, which feels like RPC and not REST.
Moreover, I don't want to assume a default file format. Instead, I'd like to require it to be provided. I don't know how to design the API to do that, and I also don't know what response code to return if the required parameter isn't provided.
So here's my first crack at it:
POST /api/alerts/export?format=csv
OR
POST /api/alerts/export/csv
Is this how you would set up the endpoint? Is it set up correctly to require the file format? And if the required file format isn't provided, what's the correct status code to return?
Thanks.
In fact you should consider HTTP content negotiation (CONNEG) for this. It leverages the Accept header (see the HTTP specification: http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.1), which specifies the expected media type for the response.
For example, for CSV, you could have something like this:
GET /api/alerts
Accept: text/csv
If you want to specify additional hints (file name, ...), the server could return the Content-Disposition header (see the HTTP specification: http://www.w3.org/Protocols/rfc2616/rfc2616-sec19.html#sec19.5.1) in the response, as described below:
GET /api/alerts
Accept: text/csv
HTTP/1.1 200 OK
Content-Disposition: attachment; filename="alerts.csv"
(...)
Hope it helps you,
Thierry