I’m interested in processing chunked requests with Scalatra. Does Scalatra support access to individual chunks of a chunked HTTP request, or do I have to wait for the end of the chunked request and process the whole request afterwards?
Scalatra is just a wrapper around Java Servlets. It allows you to access richRequest.inputStream directly. Everything else is the same as in Java.
You might need to parse chunked encoding from the input stream.
See also: Chunked http decoding in java?
You can find a wrapper for InputStream here: http://www.java2s.com/Code/Java/File-Input-Output/AnInputStreamthatimplementsHTTP11chunking.htm
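If you do end up parsing chunked framing from a raw stream yourself (normally the servlet container dechunks the body for you before it reaches your input stream), the format is simple enough to decode by hand. The following is a minimal sketch, not a full HTTP/1.1 parser: it ignores chunk extensions and trailer headers.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

public class ChunkedDecoder {

    // Reads one CRLF-terminated line from the stream.
    private static String readLine(InputStream in) throws IOException {
        StringBuilder sb = new StringBuilder();
        int c;
        while ((c = in.read()) != -1) {
            if (c == '\r') {
                in.read(); // consume the '\n'
                break;
            }
            sb.append((char) c);
        }
        return sb.toString();
    }

    // Decodes a chunked-encoded stream into the raw body bytes.
    public static byte[] decode(InputStream in) throws IOException {
        ByteArrayOutputStream body = new ByteArrayOutputStream();
        while (true) {
            // The chunk size is a hex number; drop any ";ext" chunk extension.
            String sizeLine = readLine(in);
            int semi = sizeLine.indexOf(';');
            int size = Integer.parseInt(
                (semi >= 0 ? sizeLine.substring(0, semi) : sizeLine).trim(), 16);
            if (size == 0) {
                break; // last chunk; trailers (if any) are ignored here
            }
            byte[] chunk = new byte[size];
            int off = 0;
            while (off < size) {
                int n = in.read(chunk, off, size - off);
                if (n == -1) throw new IOException("truncated chunk");
                off += n;
            }
            body.write(chunk, 0, size);
            readLine(in); // consume the CRLF that follows the chunk data
        }
        return body.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        String wire = "5\r\nHello\r\n7\r\n, world\r\n0\r\n\r\n";
        byte[] decoded = decode(new ByteArrayInputStream(
            wire.getBytes(StandardCharsets.US_ASCII)));
        System.out.println(new String(decoded, StandardCharsets.US_ASCII)); // Hello, world
    }
}
```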
Related
I am trying to figure out whether there is a way to stream large files chunk by chunk between client and server in REST, using HTTP Transfer-Encoding: chunked. Per its contract, the REST service provider accepts only application/json, but after reading about Chunked Transfer Encoding I am wondering whether I can make this work with any REST client, for example Apache HttpClient.
Handling large files with a REST API is always a challenge during transfer (the memory overhead grows under normal or heavy load), so is there a good solution for this?
If not chunking, is there any other way, such as reading bytes into a fixed-size buffer and transmitting them over HTTP? The service provider is not willing to change the REST contract and always expects the application/json media type, with the attachment being part of a multipart request.
The use case that comes to mind is handling email attachments, which are typically large.
Please advise.
The main use-case for Chunked transfer encoding is sending data for which you can't accurately predict the Content-Length.
So if your JSON is built up dynamically, and you can't accurately know in advance how many bytes it will be, chunked is the way to go.
However, if you can predict the Content-Length, there's no need to use chunked encoding, and you shouldn't, as it adds a bit of overhead. Chunked is not a requirement for streaming responses.
Also note:
Most programming languages/HTTP frameworks will automatically use Chunked encoding in circumstances where the Content-Length cannot be predicted. For Node.js this is for example any time you just write to the response stream, and for PHP this is basically always unless you explicitly set a Content-Length. So in many cases there's basically nothing you need to do to switch to this.
Chunked no longer exists in HTTP/2 as HTTP/2 has its own framing protocol that supports responses with unknown sizes.
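For illustration, the wire framing that an HTTP/1.1 stack emits when it picks chunked encoding is straightforward: each chunk is its size in hex, CRLF, the data, CRLF, and the body ends with a zero-length chunk. This is a hand-rolled sketch of that framing only; in practice your HTTP library does this for you.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;

public class ChunkedEncoder {

    // Writes one chunk: hex size, CRLF, data, CRLF.
    public static void writeChunk(OutputStream out, byte[] data) throws IOException {
        out.write(Integer.toHexString(data.length).getBytes(StandardCharsets.US_ASCII));
        out.write("\r\n".getBytes(StandardCharsets.US_ASCII));
        out.write(data);
        out.write("\r\n".getBytes(StandardCharsets.US_ASCII));
    }

    // Terminates the body with the zero-length last chunk.
    public static void finish(OutputStream out) throws IOException {
        out.write("0\r\n\r\n".getBytes(StandardCharsets.US_ASCII));
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream wire = new ByteArrayOutputStream();
        // Stream a body whose total length we "don't know" up front:
        // each piece goes on the wire as soon as it is produced.
        writeChunk(wire, "Hello".getBytes(StandardCharsets.US_ASCII));
        writeChunk(wire, ", world".getBytes(StandardCharsets.US_ASCII));
        finish(wire);
        System.out.println(wire.toString("US-ASCII"));
    }
}
```

Because each chunk carries its own length, the sender never needs to know the total body size in advance, which is exactly why chunked encoding suits dynamically generated responses.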
I need to know how HTTP/1.1, WebSocket and HTTP/2.0 work in terms of sockets (I am not interested in a list of feature differences between these three technologies).
So, when I make an HTTP/1.1 request, I know that after the server responds, my connection will be closed.
But, let's say, when I make an HTTP/1.1 request, at the transport layer a socket is initialized to send my HTTP request (headers and data) to the web server.
So I have three questions:
If HTTP/1.1 opens a socket (between my PC and the web server) to send its request, why can't it reuse that socket for multiple request-response cycles?
Is the principal difference between HTTP/1.1 and WebSocket the fact that HTTP/1.1 closes the socket after the first request-response cycle, while WebSocket doesn't?
How does HTTP/2.0 manage the socket between client and server?
Thanks in advance.
To answer your questions:
Actually, HTTP/1.1 allows the connection to be used for more than a single request by using the "keep-alive" feature.
This means that multiple HTTP/1.1 requests might be sent over a single TCP/IP connections.
However, since HTTP/1.1 doesn't allow multiplexing, the requests (and responses) are serialized, so a long request/response cycle can delay shorter ones stuck behind it in the strict queue.
FYI: closing the connection is an HTTP/1 approach, where the end of a response would be marked by the socket closing. On HTTP/1.1, the end of the response is usually known by the "Content-Length" header (or the chunked encoding marker).
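To make that framing point concrete: on a kept-alive connection the receiver must stop reading the body after exactly Content-Length bytes, because the next response follows immediately on the same socket. The sketch below is deliberately simplified (it works on strings rather than raw bytes and ignores chunked bodies):

```java
public class KeepAliveFraming {

    // Extracts the body of an HTTP/1.1 response whose end is marked by
    // Content-Length rather than by the socket closing.
    public static String extractBody(String raw) {
        int headerEnd = raw.indexOf("\r\n\r\n") + 4;
        int contentLength = 0;
        for (String line : raw.substring(0, headerEnd).split("\r\n")) {
            if (line.toLowerCase().startsWith("content-length:")) {
                contentLength = Integer.parseInt(line.substring(15).trim());
            }
        }
        // Consume only contentLength bytes; anything after them belongs to
        // the NEXT response on the same kept-alive connection.
        return raw.substring(headerEnd, headerEnd + contentLength);
    }

    public static void main(String[] args) {
        String raw = "HTTP/1.1 200 OK\r\n"
                   + "Content-Length: 5\r\n"
                   + "Connection: keep-alive\r\n"
                   + "\r\n"
                   + "HelloHTTP/1.1 200 OK\r\n"; // next response follows immediately
        System.out.println(extractBody(raw)); // Hello
    }
}
```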
No, the difference is much bigger. The WebSocket protocol isn't a request-response protocol, it's a message based protocol, which is a totally different beast.
In effect, you can think about WebSockets as more similar to TCP/IP than to HTTP, except that TCP/IP is a streaming protocol and WebSockets is a message based protocol...
The WebSockets protocol promises that messages don't arrive fragmented while TCP/IP read calls might return a fragment of a message (or more than a single message).
HTTP/2.0 uses a single connection, but it has a binary framing layer that allows the server and the client to multiplex (manage more than a single information stream over a single connection).
This means that the request-response queue is parallel (instead of the HTTP/1.1 serial queue). For example, a response to request #2 might arrive before a response to request #1.
This solves the HTTP/1.1 issue of "pipelining" and message ordering, where a long request/response cycle might cause all the other requests to "wait".
There are other properties and differences, but in some ways this is probably the main one (in addition to other performance factors such as header compression, binary data formats, etc.).
Is there a way to configure compression (GZIP) for all responses by making changes only in application.conf? What are the best ways to compress responses?
Is there a way in Play to pass data compressed with Gzip over WebSocket channels? I read in https://www.playframework.com/documentation/2.3.1/GzipEncoding about GzipFilter. Is there a way to use it with WebSocket?
Besides my server also accepts HTTP requests but I don't want to apply GZIP to them. Is there a way to use Gzip ONLY WITH WebSocket, excluding HTTP?
AFAIK there is a compression extension for WebSocket connections (https://datatracker.ietf.org/doc/html/draft-tyoshino-hybi-permessage-compression-00).
In some browsers this should be fixed by now and enabled by default (Chrome).
In others (Firefox, WebKit) it is not (yet): https://bugzilla.mozilla.org/show_bug.cgi?id=792831 and https://bugs.webkit.org/show_bug.cgi?id=98840
As long as both client and server support this, there shouldn't be any problem just enabling Gzip.
You can configure when Gzip is used (at least to some degree). For example, if you have a JSON API and you also serve normal HTML, you can decide to gzip only the JSON data:
new GzipFilter(shouldGzip = (request, response) =>
  response.headers.get("Content-Type").exists(_.startsWith("application/json")))
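If the browser-side permessage-compression extension isn't available to you, another option is to compress the message payloads yourself on both ends of the WebSocket and send them as binary frames. A minimal round-trip sketch with java.util.zip (the JSON message here is just sample data):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class WsGzip {

    // Gzips a message payload before sending it as a binary WebSocket frame.
    public static byte[] compress(String message) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(buf)) {
            gz.write(message.getBytes(StandardCharsets.UTF_8));
        }
        return buf.toByteArray();
    }

    // The reverse operation on the receiving side.
    public static String decompress(byte[] payload) throws IOException {
        try (GZIPInputStream gz = new GZIPInputStream(new ByteArrayInputStream(payload))) {
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            byte[] chunk = new byte[4096];
            int n;
            while ((n = gz.read(chunk)) != -1) out.write(chunk, 0, n);
            return out.toString("UTF-8");
        }
    }

    public static void main(String[] args) throws IOException {
        String msg = "{\"event\":\"tick\",\"price\":42.5}";
        byte[] wire = compress(msg);
        System.out.println(decompress(wire).equals(msg)); // true
    }
}
```

The trade-off is that you lose interoperability with plain-text WebSocket clients, since every peer now has to know the payloads are gzipped.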
Allow me to be more specific...
If I want to send files but am required to wrap them in SOAP, wouldn't I use HTTP? I am seeing a surprising lack of info on this online.
Sending files via SOAP doesn't have anything specifically to do with FTP. To send a file through a SOAP interface, you might base64 encode the file and stuff the whole thing into a SOAP string parameter. However, this might only be appropriate if your file size has a reasonable upper bound.
If your files can be of unbounded size, you might investigate using a different transport protocol to transfer the actual file data (e.g. HTTP or even FTP), then use SOAP to transfer a pointer to the file (such as its URL). Some implementations of SOAP cannot handle arbitrarily large messages.
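The base64-in-a-string-parameter approach looks like this in outline. This is a hedged sketch: the operation and parameter names (UploadFile, fileData) are illustrative placeholders, not a real service contract; use whatever your WSDL actually defines, and note that base64 inflates the payload by roughly a third.

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class SoapFileParam {

    // Wraps base64-encoded file bytes in a minimal SOAP 1.1 envelope.
    // UploadFile and fileData are made-up names for illustration only.
    public static String buildEnvelope(byte[] fileBytes) {
        String b64 = Base64.getEncoder().encodeToString(fileBytes);
        return "<soap:Envelope xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\">"
             + "<soap:Body>"
             + "<UploadFile><fileData>" + b64 + "</fileData></UploadFile>"
             + "</soap:Body>"
             + "</soap:Envelope>";
    }

    public static void main(String[] args) {
        byte[] file = "file contents".getBytes(StandardCharsets.UTF_8);
        System.out.println(buildEnvelope(file));
    }
}
```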
Pretty vague question, but if you're using web services you can use MTOM (SOAP Message Transmission Optimization Mechanism): http://en.wikipedia.org/wiki/MTOM
I don't know your environment but there are examples of this using .NET / WCF if you Google it.
Two standard ways of sending files along with SOAP messages are:
SOAP with Attachments
MTOM
MTOM supports either using MIME attachments or base64 encoding the file into the body, whereas SOAP with Attachments only supports MIME attachments.