Is there a way in Play to pass data compressed with Gzip over WebSocket channels? I read in https://www.playframework.com/documentation/2.3.1/GzipEncoding about GzipFilter. Is there a way to use it with WebSocket?
Besides, my server also accepts HTTP requests, but I don't want to apply Gzip to them. Is there a way to use Gzip ONLY WITH WebSocket, excluding HTTP?
AFAIK there is a compression extension for WebSocket connections (permessage-compression): https://datatracker.ietf.org/doc/html/draft-tyoshino-hybi-permessage-compression-00.
In some browsers this should be implemented by now and enabled by default (Chrome).
In others (Firefox, WebKit) it is not (yet): https://bugzilla.mozilla.org/show_bug.cgi?id=792831 and https://bugs.webkit.org/show_bug.cgi?id=98840
As long as both client and server support this, there shouldn't be any problem just enabling Gzip.
You can configure when Gzip is used (at least to some degree). For example, if you have a JSON API and you also serve normal HTML, you can decide to gzip only the JSON data:
new GzipFilter(shouldGzip = (request, response) =>
  response.headers.get("Content-Type").exists(_.startsWith("application/json")))
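For completeness, here is a minimal sketch of how such a filter is typically wired up in a Play 2.3 application via the Global object (the predicate is the same one shown above; adjust it to whatever condition distinguishes the traffic you want compressed):

import play.api.mvc.WithFilters
import play.filters.gzip.GzipFilter

// Play 2.3-style Global object registering the filter application-wide;
// the shouldGzip predicate decides per request/response whether to compress.
object Global extends WithFilters(
  new GzipFilter(shouldGzip = (request, response) =>
    response.headers.get("Content-Type").exists(_.startsWith("application/json")))
)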
React/Next front-end + Node.js and MongoDB on the back?
How would video storage work?
You can use POST requests to send files to your remote server. Your backend code should read the request data and then store the file on disk or in any object storage like S3. Most backend web frameworks have libraries to store files received in an HTTP request directly to S3.
Most web development frameworks can guess the MIME type, but since you already know here that it's video/mp4, you can just save it.
I must warn you: if you are trying to upload huge files, it might be a better idea to use chunked uploads. That gives you the ability to pause and resume, and it is robust to network failures.
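To make the server side concrete, here is a minimal sketch of a file-upload endpoint. It uses Play/Scala (the only framework with code elsewhere on this page); a Node.js backend would do the equivalent with its own multipart middleware. The field name and target path are placeholders:

import java.io.File
import play.api.mvc._

// Controller action sketch: parse the multipart body, grab the "video" part,
// and move the temporary file onto local disk (or push it to S3 instead).
object Uploads extends Controller {
  def upload = Action(parse.multipartFormData) { request =>
    request.body.file("video").map { video =>
      // The client told us it's video/mp4; keep that as metadata if you store it in S3.
      video.ref.moveTo(new File(s"/tmp/uploads/${video.filename}"), replace = true)
      Ok("stored " + video.filename)
    }.getOrElse(BadRequest("missing 'video' file part"))
  }
}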
I wanted to know which protocol Apache Beam uses to read from and write to Cloud Storage. Is it HTTPS or binary (blob)? I tried to Google it but did not find an answer. I know the gsutil command uses the HTTPS protocol.
You are mixing two things: the transport layer and the data encoding.
Does Google use HTTP transport? YES, for all the APIs. HTTPS and gRPC (HTTP/2) are commonly used.
Does Google use binary encoding to speed up the transfer? As said before, the transport can be HTTPS or gRPC. HTTPS is commonly used for REST APIs and carries JSON in text format. Of course, you can also exchange files in binary format (GZIP, for example, to compress and speed up the transfer). gRPC is a binary protocol: you don't exchange JSON but a binary representation of the data you want to exchange, and thereby the file transfer is also in binary mode.
Now, what does Beam use? As often, the Google libraries use gRPC behind the scenes, and thus the encoding is binary. If you perform REST API calls yourself, JSON over HTTP will be used for that; but the file content is, when it can be (depending on your request's Accept header), transferred in binary.
EDIT 1
For Beam, I had a look at the source code. You can find there, for example, the creation of the GoogleCloudStorageImpl object.
Have a look at the full class name: import com.google.cloud.hadoop.gcsio.GoogleCloudStorageImpl;. OK, let's look at that hadoop package!!
The Javadoc is clear: the JSON API is used. To confirm that, I went to the source code, and YES, the JSON format is used for the API communication.
BUT keep in mind that this is the API communication, i.e. the metadata around the file content. The file content itself should be sent in binary format (plain text or Base64 encoding would be strange).
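For illustration, here is a minimal Beam pipeline sketch reading from Cloud Storage (written in Scala against the Java SDK; the bucket and object names are placeholders). The point is that the user code never chooses a transport; the underlying gcsio client handles the JSON API calls for the metadata and the binary transfer of the object bytes:

import org.apache.beam.sdk.Pipeline
import org.apache.beam.sdk.io.TextIO
import org.apache.beam.sdk.options.PipelineOptionsFactory

// The pipeline only names a gs:// location; how the bytes travel is handled
// by the Cloud Storage client behind TextIO, not by this code.
object GcsReadSketch {
  def main(args: Array[String]): Unit = {
    val options = PipelineOptionsFactory.fromArgs(args: _*).create()
    val pipeline = Pipeline.create(options)
    pipeline.apply("ReadFromGcs", TextIO.read().from("gs://my-bucket/input.txt"))
    pipeline.run().waitUntilFinish()
  }
}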
I am trying to figure out if there is a way to stream large files chunk by chunk using HTTP Transfer-Encoding: chunked between client and server in REST. By its semantics the REST service provider accepts only application/json, but I read about Chunked Transfer Encoding and am wondering whether I can make this work using any REST client, say for example Apache HttpClient.
Handling large files (the memory overhead grows under normal/huge loads) is always a challenge with a REST API during transfer, so is there an optimal solution for this?
If not chunking, is there any other way, like reading bytes into a fixed buffer and transmitting them over HTTP? The service provider is not willing to change the REST contract and always expects the application/json media type, with the attachment being part of a multipart request.
The use case that comes to my mind is handling attachments in email, which are typically big in size.
Please advise.
The main use-case for Chunked transfer encoding is sending data for which you can't accurately predict the Content-Length.
So if your JSON is built up dynamically, and you can't accurately know in advance how many bytes it will be, chunked is the way to go.
However, if you can predict the Content-Length, it's not needed to use Chunked, and you shouldn't, as there is a bit of overhead in doing so. Chunked is not a requirement for streaming responses.
Also note:
Most programming languages/HTTP frameworks will automatically use Chunked encoding in circumstances where the Content-Length cannot be predicted. For Node.js this is, for example, any time you just write to the response stream, and for PHP it is basically always, unless you explicitly set a Content-Length. So in many cases there's basically nothing you need to do to switch to this.
Chunked no longer exists in HTTP/2 as HTTP/2 has its own framing protocol that supports responses with unknown sizes.
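Since the question mentions Apache HttpClient, here is a minimal sketch (HttpClient 4.x; the URL and file path are placeholders) of streaming a large request body with an unknown length, which makes the client send it with Transfer-Encoding: chunked instead of buffering it in memory:

import java.io.FileInputStream
import org.apache.http.client.methods.HttpPost
import org.apache.http.entity.{ContentType, InputStreamEntity}
import org.apache.http.impl.client.HttpClients

// Passing -1 as the length marks the entity as "length unknown", and
// setChunked(true) asks HttpClient to stream it with chunked transfer encoding.
object ChunkedUpload {
  def main(args: Array[String]): Unit = {
    val client = HttpClients.createDefault()
    try {
      val post = new HttpPost("https://example.com/api/attachments")
      val entity = new InputStreamEntity(
        new FileInputStream("/path/to/large-attachment.bin"), -1,
        ContentType.APPLICATION_OCTET_STREAM)
      entity.setChunked(true)
      post.setEntity(entity)
      val response = client.execute(post)
      println(response.getStatusLine)
      response.close()
    } finally client.close()
  }
}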
I'm comparing various options for hosting a static website. Right now I'm hesitating between Google App Engine and Google Cloud Storage.
For App Engine, I know from the documentation that it's possible to have content served compressed only for clients that declare support for this (via the HTTP Accept-Encoding header).
For Cloud Storage, I see that if you upload compressed content and set the Content-Encoding field to "gzip", Cloud Storage will correctly serve it back compressed to clients that declare support for that. My question is, what happens with Cloud Storage when a client does a GET on an object stored with "gzip" content encoding, but the client does not declare support for gzip-compressed data with Accept-Encoding in its request? Is the data decompressed on the fly (which is what I would hope), or is some kind of error returned, or is the data served compressed anyway (not great)?
Indeed you can store objects in Google Cloud Storage with Content-Encoding: gzip. If a subsequent request for this object does not include the Accept-Encoding: gzip header, the object will get decompressed on the fly, yes.
(Sidenote: Content-Encoding should not be confused with Content-Type, e.g., application/gzip, which is always left untouched.)
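As an illustration, here is a minimal sketch (Scala with the google-cloud-storage Java client; the bucket, object, and file names are placeholders) of uploading a pre-compressed object so that Cloud Storage can apply this decompressive transcoding on download:

import java.nio.file.{Files, Paths}
import com.google.cloud.storage.{BlobInfo, StorageOptions}

// Upload bytes that are already gzip-compressed and label them with
// Content-Encoding: gzip; clients that omit Accept-Encoding: gzip then
// receive the object decompressed on the fly.
object GzipUpload {
  def main(args: Array[String]): Unit = {
    val storage = StorageOptions.getDefaultInstance.getService
    val blobInfo = BlobInfo.newBuilder("my-static-site", "index.html")
      .setContentType("text/html")
      .setContentEncoding("gzip")
      .build()
    storage.create(blobInfo, Files.readAllBytes(Paths.get("index.html.gz")))
  }
}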
Are there any classes or libraries to read a gzipped stream from a server? For example, Java has the GZIPInputStream and GZIPOutputStream classes to read from and write to a gzipped stream. Does the iPhone SDK have such libraries, or are there any external libraries we can use?
Is this a web server, and can you tell it that the content encoding is gzip? If so, apparently NSURLRequest accepts gzip encoding transparently. In other words, you can make a request which looks like it's going to get the uncompressed data, the server can deliver gzip compressed data, and when you read it you'll get it decompressed already. You just need to be able to tell the server what's going on, really.