I have a question about a web server's (Nginx, Apache, etc.) ability to serve posted content using chunked transfer encoding.
Suppose we have two clients. Client 1 is posting chunk-encoded content to a web server, and client 2 is requesting the same content from that server at the same time. Can the web server send the chunks to client 2 in chunked encoding while it is still receiving them from client 1?
I am thinking of client 1 posting content to a FastCGI backend, which also serves the content to client 2. But it seems that many web server implementations buffer the chunks from client 1 and only pass the complete POST body to the FastCGI backend. This introduces unnecessary delay.
Any idea how to remove this delay?
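For Nginx specifically, a hedged sketch: newer versions expose directives that disable request and response buffering, so chunks are streamed through as they arrive. The directive names below assume Nginx 1.7.11 or later and a FastCGI backend on port 9000; verify both against your setup:

```nginx
location /stream {
    # Pass request-body chunks to the FastCGI backend as they arrive,
    # instead of spooling the whole POST body first.
    fastcgi_request_buffering off;

    # Likewise, stream the backend's response to downstream clients
    # without waiting for it to complete.
    fastcgi_buffering off;

    fastcgi_pass 127.0.0.1:9000;
    include fastcgi_params;
}
```

Note that even with buffering off, whether client 2 sees the bytes live still depends on your FastCGI application relaying them as they come in.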
Thanks.
I use a C++ library called restbed as a web server to deliver static HTML files. On the same machine I have another web server running, and I would like to redirect some of the incoming connections to restbed. Based on each request, I would decide whether to redirect it to the other server.
Is it technically possible and advised to connect two sockets with each other, if I get access to the underlying socket of the incoming HTTP connection?
If not, what would be a common approach for this? I can only have one TCP port for both services.
Yes, you can respond to an HTTP request by opening a connection to another HTTP server, forwarding the request to that server, and then forwarding the response back to the original client. In fact it's common for Internet-facing systems to include some kind of "front end" or "reverse proxy" or "L7 load balancer" or "API gateway" that does exactly this, often applying some kind of authentication, input validation, or routing logic in the process.
If you're building this yourself, it's not quite as simple as just opening a socket to the second HTTP server and forwarding the request verbatim. You should use some HTTP client library to send the request to the second server. In other words, the HTTP server that receives the original request should then turn around and be an HTTP client for the second server. When preparing the request for the second server, you should copy some but not all of the data out of the original request.
You should copy the HTTP method and URL.
You should probably not copy the scheme (http: or https:) because how the client chose to connect to the original server doesn't have to influence how that server will connect to the second server; you might be using HTTPS for the original server but forward requests using HTTP.
You should not copy the Host header unless for some reason the second server has been configured to respond to the same host name as the original server.
You should not copy headers that will confuse the HTTP client library you're using to connect to the second server. For example, if the client sends you Accept-Encoding: gzip then it is claiming to be able to accept gzipped responses, but if you forward that header, the second server will think that the HTTP client library you're using in your server can accept gzipped responses, whether it actually can or not.
You should forward the conditional request headers (If-None-Match, If-Modified-Since) if you want the second server to be able to send 304 Not Modified when the client already has the file.
If you're just serving static files from the second server, then you can probably get something to work just by sending the HTTP method and URL only and ignoring the other request headers.
It's a similar story on the response side. You should probably copy some headers like Content-Type, but others, like Content-Length, will be set by your server, so you should not copy those headers. Try starting out by copying no headers and see if it works, then copy individual headers to address issues you discover. You will probably at least need to copy Content-Type.
HTTP has a lot of features, and I can't hope to go through all the possible situations here. The point I want to get across is that you can't just copy all the headers from one request or response into the other, because some of them may not apply, but you can't just copy none of them either. You have to understand what the headers do and handle them appropriately.
Which headers you should preserve depends a lot on how much handling of the request and response you're doing in the first server. The more the first server handles or interprets the request and/or response, the more its interaction with the second server becomes independent of its interaction with the client, and the fewer headers you should copy.
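As a concrete illustration of the header-copying advice above, here is a minimal sketch in Python. The allow-lists are my own illustrative starting points, not a standard; extend them as you discover issues:

```python
# Which headers to copy in each direction. Deliberately absent: Host,
# Accept-Encoding, Content-Length, Connection -- those describe the hop,
# not the resource, and should be set per connection by each client/server.
REQUEST_HEADERS_TO_FORWARD = {
    "accept", "accept-language",           # content negotiation
    "if-none-match", "if-modified-since",  # let the second server answer 304
}
RESPONSE_HEADERS_TO_COPY = {
    "content-type", "cache-control", "etag", "last-modified",
}

def filter_headers(headers, allow_list):
    """Return only the headers whose (case-insensitive) name is allowed."""
    return {name: value for name, value in headers.items()
            if name.lower() in allow_list}
```

On the request side you would run the original request's headers through `filter_headers(..., REQUEST_HEADERS_TO_FORWARD)` before handing them to your HTTP client library; on the response side, the mirror operation with `RESPONSE_HEADERS_TO_COPY` before replying to the original client.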
Summary
Is there a way to programmatically call REST URLs set up in JBoss via RESTEasy, so that the programmatic call actually drills down through the REST processor to find and execute the correct endpoint?
Background
We have an application that has ~20 different REST endpoints, and we have set the application up to receive data from other federated peers. To cut down on cross-network HTTP requests, the peer site sends a batch of requests to the server, and the receiving server needs to act upon each URL it receives. Example data flow:
Server B --> [Bulk of requests sent via HTTP/Post] --> Server A breaks list down to individual URLs --> [Begin Processing]
The individual URLs are REST URLs that the receiving server is familiar with.
Possible Solutions
Have the receiving server read through the URLs it receives, and call the management beans directly
The downside here is that we have to write additional processing code to decode the URL strings that are received.
The upside to this approach is that there is no ambiguity about what happens.
Have the receiving server execute the URL on itself
The receiving server could rewrite the URL as http://127.0.0.1:8080/rest/... and make an HTTP request to itself.
The downside here is that the receiving server could have to make a lot of HTTP requests to itself (it's already somewhat busy processing "real" requests from the outside world).
Preferred: Have the receiving server access the main RESTEasy bean somehow and feed it the request.
Sort of a combination of 1 and 2, without the manual processing of 1 or the HTTP requests involved in 2.
Technology Stack
JBoss 6.0.0 AS (2010 release) / Java 6
RESTEasy
How does a web server know that the browser no longer requires a response? For example:
Client/browser sends a request
Web server is processing
Client/browser moves to another page
When a new request comes in from the client, does the server kill the thread handling the previous one?
Each client request is bound to its own response:
Client/browser sends a request (request1)
Web server is processing
Client/browser moved to another page (request2)
Web server is processing
Web server returns response1
Client/browser ignores response1 (it is waiting for the response to its latest request)
Web server returns response2
Client/browser displays response2
Clicking on several links at the same time will generate several requests, which will be time-consuming for the server. The browser does the job of ignoring the irrelevant responses.
To answer your first question: the server can't know, unless you implement a service that kills old server processes. That's why web servers have a timeout parameter.
I'm trying to upload a file to my personal server.
I've written a small PHP page that has worked flawlessly so far.
The slightly odd thing is that I generate the whole body of the HTTP message I'm going to send (let's say it amounts to ~4 MB) and then send the request to my server.
The server, then, asks for an HTTP challenge and my delegate connection:didReceiveAuthenticationChallenge:challenge replies to the server with the proper credentials and the data.
But what actually happens? The data gets sent twice!
In fact, I noticed this when I added the progress bar: the app sends the data (4 MB), the server asks for authentication, and the app re-sends the data along with the credentials (another 4 MB). So in the end I've sent 8 MB. That's wrong.
I started googling and searching for a solution but I can't figure out how to fix this.
As I see it, there are two possible approaches:
Establish the authentication realm for the whole session up front (a minimal HTTP request, then the challenge, then the data)
Perform the HTTP connection synchronously (something I would rather not do, since it seems an ugly way to handle this kind of thing)
Thank you
You've run into a flaw in the HTTP protocol: when you send a request with no credentials, you have to send all the data before getting the response with the auth challenge. You can try making a small round trip as the first request of the session (as you mentioned), such as a HEAD request; subsequent requests will then share the same nonce.
Too late for the original asker, but perhaps in time for somebody else who reads this.
TL;DR: Section 8.2.3 of RFC 2616 describes the 100 Continue status, which is exactly what you need (or would have needed) in this situation.
Also have a look at sections 10.1.1 and 14.20.
The client sends a request with an "Expect: 100-continue" header, pausing before sending the body. The server uses the headers it has already received to decide whether the request can be accepted: is the entity (the body) to be received not too large, are the user's credentials correct, and so on. If the request is acceptable, the server replies with a "100 Continue" status code; the client then sends the body, and the server replies with the final status code for that request. Conversely, if the request is not acceptable, the server replies with a 4xx status code ("413 Request Entity Too Large" if the declared body size is too large, or "401 Unauthorized" plus the WWW-Authenticate: header), and the client does not send the body. Having been answered with a 401 status code and the corresponding WWW-Authenticate: information, the client can now perform the request again, this time providing its credentials.
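A rough raw-socket sketch of the client side of that handshake in Python. This is illustrative only: a production client also needs timeouts, TLS, and a fallback for servers that send the final response without an interim 100 (the helper names here are my own):

```python
import socket

def build_request_head(method, path, host, content_length):
    """Request line and headers only; the body is withheld until the
    server answers 100 Continue."""
    return (f"{method} {path} HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            f"Content-Length: {content_length}\r\n"
            f"Expect: 100-continue\r\n"
            f"\r\n").encode()

def parse_status(status_line):
    """b'HTTP/1.1 100 Continue\r\n' -> 100"""
    return int(status_line.split(None, 2)[1])

def upload(host, port, path, body):
    """Send headers first; transmit the body only if invited to."""
    with socket.create_connection((host, port)) as sock:
        sock.sendall(build_request_head("POST", path, host, len(body)))
        reader = sock.makefile("rb")
        status = parse_status(reader.readline())
        if status == 100:
            reader.readline()   # blank line ending the interim response
            sock.sendall(body)  # now it is safe to send the (large) body
            status = parse_status(reader.readline())
        # Otherwise the server rejected the request up front (401, 413, ...)
        # and the body was never transmitted.
        return status
```

On a 401 the caller would retry with an Authorization header computed from the WWW-Authenticate challenge, again paying only the small header round trip rather than re-sending the body blind.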
I am writing a collection of web services, one of which needs to implement server push.
The client will be native Objective-C. I want this to be as simple, fast, and lightweight as possible. The data transmitted will be JSON. Is it possible to do this without using a message broker?
There's an HTTP technique called COMET in which the client spins up a thread that makes a potentially very long-lived request to the HTTP server. Whenever the server wants to send something to the client, it sends a response to this request. The client processes this response and immediately makes another long-lived request to the server. In this way the server can send information while other things happen in the client's main execution thread(s). The information sent by the server can be in any format you like. (In fact, for clients in a web browser doing COMET with a Javascript library, JSON is perfect.)
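A minimal sketch of that client loop in Python (the endpoint URL and one-JSON-event-per-response shape are assumptions; a production client would also need error backoff and reconnect limits):

```python
import json
import urllib.error
import urllib.request

def long_poll(url, handle_event, timeout=90, max_events=None):
    """Repeatedly issue a long-lived GET; each response is one server-pushed
    JSON event, after which we immediately reconnect. max_events exists only
    to make the otherwise-endless loop stoppable (e.g. in tests)."""
    received = 0
    while max_events is None or received < max_events:
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                handle_event(json.loads(resp.read()))
                received += 1
        except (TimeoutError, urllib.error.URLError):
            # Idle period with no events: just reconnect immediately.
            continue
```

The same loop structure ports directly to Objective-C with NSURLConnection/NSURLSession: one long-lived request per event, re-issued as soon as a response arrives.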
@DevDevDev: It's true that COMET is most often associated with a Javascript-enabled browser, but I don't think it has to be. You might check out iStreamLight, which is an Objective-C client for the iPhone that connects to COMET servers. It's also discussed in this interview with the author.