In a request handler processing e.g. GET https://example.com/collections/1 or POST http://0.0.0.0:8080/collections, how do I get the server address, https://example.com and http://0.0.0.0:8080 respectively?
Currently I'm constructing it like so:
var url = "\(httpPrefix)\(server.serverAddress)"
if server.serverPort != 443 { url += ":\(server.serverPort)" }
where httpPrefix is:
let httpPrefix = isLinux ? "https://" : "http://"
But it feels like there's a better way...
I've discovered that the Host header contains either example.com or 0.0.0.0:8080, depending on the server.
So the following produces exactly the result I need. httpScheme is still hardcoded (which is something I don't like, but I don't see other options yet).
httpScheme + request.header(.host)!
I'm force-unwrapping, as I observe the Host header is always there.
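For what it's worth, here is the same Host-header technique as a minimal sketch in Go's net/http (not the Swift framework used above), where the scheme can be inferred from the connection instead of being hardcoded:

package main

import (
    "fmt"
    "log"
    "net/http"
)

// baseURL rebuilds the server address from the scheme and the Host header.
func baseURL(r *http.Request) string {
    scheme := "http"
    if r.TLS != nil { // the connection itself says whether we're on HTTPS
        scheme = "https"
    }
    // r.Host carries the Host header as the client sent it,
    // e.g. "example.com" or "0.0.0.0:8080".
    return fmt.Sprintf("%s://%s", scheme, r.Host)
}

func main() {
    http.HandleFunc("/collections", func(w http.ResponseWriter, r *http.Request) {
        fmt.Fprintln(w, baseURL(r))
    })
    log.Fatal(http.ListenAndServe("0.0.0.0:8080", nil))
}

(Behind a TLS-terminating proxy r.TLS is nil, so the scheme would have to come from something like X-Forwarded-Proto, which is the same hardcoding problem the question runs into.)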
Related
I'm trying to redirect requests to some URL. My server is written in Rust with warp.
The library https://docs.rs/warp-reverse-proxy/latest/warp_reverse_proxy/ seems to do exactly what I need, but when it forwards the request with all the original headers, the server receiving it rejects it for two reasons:
the Host header isn't what it expects
the Content-Length isn't what it expects
I tried removing those headers, but I can't make the filters work. I tried so many combinations that pasting all of them here wouldn't be very helpful.
Is there a preferred way of capturing a request, removing some headers, and then forwarding it?
Edit to add some code:
warp::serve(
    warp::get()
        .and(warp::path("graphiql"))
        .and(juniper_warp::graphiql_filter("/graphql", None))
        .or(warp::path("graphql").and(graphql_filter))
        // .or(reverse_proxy_filter(
        //     "".to_string(),
        //     "https://example.com/".to_string(),
        // ))
        .with(log),
)
.run(([127, 0, 0, 1], 8080))
.await
user -> proxy.com -> example.com
The reverse_proxy_filter forwards the request, but the host on the other side rejects the call. These are different domains, so when example.com receives the request, it rejects it because the Host header still carries the original domain.
Edit: Solved by adding another filter:
.or(warp::any().and(request_filter).and_then(
    |path, query, method, mut headers: Headers, body: Body| {
        headers.remove("Host");
        headers.remove("Content-length");
        proxy_to_and_forward_response(
            "https://example.com".to_string(),
            "".to_string(),
            path,
            query,
            method,
            headers,
            body,
        )
    },
))
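For comparison, here is the same strip-the-headers-then-forward idea as a minimal sketch using Go's standard net/http/httputil reverse proxy, with https://example.com standing in for the upstream as above:

package main

import (
    "log"
    "net/http"
    "net/http/httputil"
    "net/url"
)

func main() {
    upstream, err := url.Parse("https://example.com")
    if err != nil {
        log.Fatal(err)
    }
    proxy := httputil.NewSingleHostReverseProxy(upstream)

    rewrite := proxy.Director
    proxy.Director = func(req *http.Request) {
        rewrite(req)                     // rewrites the URL to point at the upstream
        req.Host = upstream.Host         // make the Host header match the upstream
        req.Header.Del("Content-Length") // let the transport recompute the length
    }

    log.Fatal(http.ListenAndServe("127.0.0.1:8080", proxy))
}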
The only answers I can find for this have specific parameters and don't work if the parameters change.
I have a URL coming in such as:
/logs.php?user_id=10032&account_id=10099&message=0&interval=1&online=1&rsid=65374826&action=update&ids=827,9210,82930&session_id=1211546313-1602275138
I need to redirect this URL to a completely different domain and file but keep the GET params.
So it ends up redirecting to:
https://example.com/the_logger.php?REPEAT_ALL_GET_PARAMETERS_HERE
Here is what I have unsuccessfully tried so far:
location logs.php
{
    rewrite https://example.com/the_logger.php?$args last;
}
But it doesn't seem to match the URL or redirect. I think I'm misunderstanding the logic of nginx confs. If it were .htaccess, I think I'd be okay. I can add a few more examples of what I'm trying to achieve if need be.
As you state this is a redirect and not reverse proxying a request, I would use the return directive to tell the client to do a 301 or 302. Using the return directive is the simpler and more recommended approach to redirecting a client.
Something like this should do what you want:
location /logs {
    return 302 https://example.com/the_logger.php$is_args$args;
}
Where $is_args outputs a ? if and only if the query string is not empty, and $args is the query string itself.
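For comparison, here is a minimal sketch of the same query-preserving redirect as a Go handler; the paths and target mirror the nginx example above:

package main

import (
    "log"
    "net/http"
)

func main() {
    http.HandleFunc("/logs.php", func(w http.ResponseWriter, r *http.Request) {
        target := "https://example.com/the_logger.php"
        if r.URL.RawQuery != "" { // plays the role of nginx's $is_args
            target += "?" + r.URL.RawQuery // $args: the query string, verbatim
        }
        http.Redirect(w, r, target, http.StatusFound) // a 302, as above
    })
    log.Fatal(http.ListenAndServe(":8080", nil))
}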
Let's say I have a bunch of clients who all have their own numeric IDs. Each of them connects to my server through SockJS, with something like:
var sock = new SockJS("localhost:8080/sock/100");
In this case, 100 is that client's numeric ID, but it could be any number with any number of digits. How can I set up a SockJS router in my server-side code that allows for the client to set up a SockJS connection through a URL that varies based on what the user's ID is? Here's a simplified version of what I have on the server-side right now:
public void start() {
    HttpServer server = vertx.createHttpServer();
    Router router = Router.router(vertx);
    SockJSHandler sockHandler = SockJSHandler.create(vertx);
    router.route("/sock/*").handler(sockHandler);
    server.requestHandler(router::accept).listen(8080);
}
This works fine if the client connects through localhost:8080/sock, but it doesn't seem to work if I add "/100" to the end of the URL. Instead of getting the default "Welcome to SockJS!" message, I just get "Not Found." I tried setting a path regex and I got an error saying that sub-routers can't use pattern URLs. So is there some way to allow for the client to connect through a variable URL, whether it's /sock/100, /sock/15, or /sock/1123123?
Ideally, I'd be able to capture the numeric ID that the client uses (like with routing REST API calls, when you could add "/:ID" to the routing path and then capture the value that the client uses), but I can't find anything that works for SockJS connections.
Since it seems that SockJS connections are considered to be the same as sub-routers, and sub-routers can't have pattern URLs, is there some work-around for this? Or is it not possible?
Edit
Just to add to what I said above, I've tried a couple different things which haven't seemed to work yet.
I tried setting up an initial, generic main router, which then re-directs to the SockJS handler. Here's the idea I had:
router.routeWithRegex("/sock/\\d+").handler(context -> {
    context.reroute("/final");
});
router.route("/final").handler(SockJSHandler.create(vertx));
With this, if I access localhost:8080/sock/100 directly through the browser, it takes me to the "Welcome to SockJS!" page, and the Chrome network tab shows that a websocket connection has been created when I test it through my client.
However, I still get an error because the websocket handshake returns a 200 status code rather than 101. I'm not 100% sure why that is happening, but I would guess it has to do with the response the initial handler produces. If I try to set the initial handler's status code to 101, I still get an error, because then the initial handler fails.
If there's some way to work around these status codes (it seems like the websocket is expecting 101 but the initial handler is expecting 200, and I think I can only pick one), then that could potentially solve this. Any ideas?
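SockJS and Vert.x specifics aside, here is what the capture-a-numeric-ID-from-the-path part looks like as a minimal sketch in plain Go (plain HTTP, not SockJS), for comparison:

package main

import (
    "fmt"
    "log"
    "net/http"
    "strconv"
    "strings"
)

func main() {
    // A trailing-slash pattern matches /sock/<anything>, like a wildcard route.
    http.HandleFunc("/sock/", func(w http.ResponseWriter, r *http.Request) {
        idPart := strings.TrimPrefix(r.URL.Path, "/sock/")
        id, err := strconv.Atoi(idPart) // accept only numeric IDs
        if err != nil {
            http.NotFound(w, r)
            return
        }
        fmt.Fprintf(w, "client ID: %d\n", id)
    })
    log.Fatal(http.ListenAndServe(":8080", nil))
}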
I enabled gzip compression for all the responses in my web service (Play 2.4) by following those instructions. It was easy to set up, and I can see it works like a charm, having checked with curl and Wireshark that the responses are sent compressed.
Now I want to be a good developer and add an integration test to make sure no one breaks HTTP compression next week. Here's where the fun begins! My test looks like this:
"use HTTP compression" in {
forAll(endPoints) { endPoint =>
val response = await(
WS.url(Localhost + port + "/api" + endPoint).withHeaders("Accept-Encoding" -> "gzip").get()
)
response.header("Content-Encoding") mustBe Some("gzip")
}
}
However, the test fails, as WS's response headers don't include content encoding information, and the body is returned as plain text, uncompressed.
[info] - should use HTTP compression *** FAILED ***
[info] forAll failed, because:
[info] at index 0, None was not equal to Some("gzip") (ApplicationSpec.scala:566)
Checking the traffic in Wireshark when running this test, I can clearly see the server is returning a gzip-encoded response, so it looks like WS is somehow transparently decompressing the response and stripping the Content-Encoding header? Is there a way I can get the plain, compressed response with full headers, so I can check whether the response is compressed or not?
I don't think you can do that. If I'm not mistaken, the problem here is that Netty returns the content already uncompressed, so the header is removed as well.
There is a configuration option in AsyncHttpClient to control that (setKeepEncoding), but unfortunately it only works in version 2.0 and newer, while the Play 2.4 WS lib uses version 1.9.x.
Either way, the client Play gives you is already configured, and I don't know if you are able to tweak it. But you can create a new client to emulate that behavior:
// Converted from Java code: I have never worked with those APIs in Scala
val cfg = new AsyncHttpClientConfig.Builder().addResponseFilter(new ResponseFilter {
  override def filter[T](ctx: FilterContext[T]): FilterContext[T] = {
    val headers = ctx.getRequest.getHeaders
    if (headers.containsKey("Accept-Encoding")) {
      ctx.getResponseHeaders.getHeaders.put("Content-Encoding", List("gzip"))
    }
    ctx
  }
}).build()
val client: NingWSClient = NingWSClient(cfg)
client.url("...") // (...)
Again, this is just emulating the result you need. Also, more clever logic than just adding gzip as the Content-Encoding (e.g. putting the first algorithm requested in Accept-Encoding) is probably advisable.
Turns out we can't really use Play WS for this specific test, because it already returns the content uncompressed and stripped of the header (see @Salem's insightful answer), so there's no way to check whether the response is compressed.
However, it's easy enough to write a test that checks for HTTP compression using standard Java classes. All we care about is whether the server answers in (valid) gzip form when we send a request with Accept-Encoding: gzip. Here's what I ended up with:
forAll(endPoints) { endPoint =>
  val url = new URL(Localhost + port + "/api/" + endPoint)
  val connection = url.openConnection().asInstanceOf[HttpURLConnection]
  connection.setRequestProperty("Accept-Encoding", "gzip")
  Try {
    new GZIPInputStream(connection.getInputStream)
  } must be a 'success
}
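As an aside, Go's http.Client behaves much like WS here: it transparently gunzips and strips Content-Encoding, but only when it added Accept-Encoding itself, and setting the header manually opts out. A sketch of the same check, with the URL a placeholder:

package main

import (
    "compress/gzip"
    "fmt"
    "io"
    "log"
    "net/http"
)

func main() {
    req, err := http.NewRequest("GET", "http://localhost:9000/api/endpoint", nil)
    if err != nil {
        log.Fatal(err)
    }
    // Setting Accept-Encoding by hand disables Go's transparent decompression,
    // so the raw compressed body and the Content-Encoding header come through.
    req.Header.Set("Accept-Encoding", "gzip")

    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        log.Fatal(err)
    }
    defer resp.Body.Close()

    fmt.Println(resp.Header.Get("Content-Encoding")) // "gzip" if compressed
    zr, err := gzip.NewReader(resp.Body)             // fails if not valid gzip
    if err != nil {
        log.Fatal(err)
    }
    io.Copy(io.Discard, zr) // decompress fully to validate the stream
}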
Upon executing an HTTP GET request, I receive the following error:
2015/08/30 16:42:09 Get https://en.wikipedia.org/wiki/List_of_S%26P_500_companies:
stopped after 10 redirects
In the following code:
package main

import (
    "log"
    "net/http"
)

func main() {
    response, err := http.Get("https://en.wikipedia.org/wiki/List_of_S%26P_500_companies")
    if err != nil {
        log.Fatal(err)
    }
    defer response.Body.Close()
}
I know that according to the documentation,
// Get issues a GET to the specified URL. If the response is one of
// the following redirect codes, Get follows the redirect, up to a
// maximum of 10 redirects:
//
// 301 (Moved Permanently)
// 302 (Found)
// 303 (See Other)
// 307 (Temporary Redirect)
//
// An error is returned if there were too many redirects or if there
// was an HTTP protocol error. A non-2xx response doesn't cause an
// error.
I was hoping that somebody knows what the solution would be in this case. It seems rather odd that this simple URL results in more than ten redirects. It makes me think that there may be more going on behind the scenes.
Thank you.
As others have pointed out, you should first give thought to why you are encountering so many HTTP redirects. Go's default policy of stopping at 10 redirects is reasonable. More than 10 redirects could mean you are in a redirect loop. That could be caused outside your code. It could be induced by something about your network configuration, proxy servers between you and the website, etc.
That said, if you really do need to change the default policy, you do not need to resort to editing the net/http source as someone suggested.
To change the default handling of redirects you will need to create a Client and set CheckRedirect.
For your reference:
http://golang.org/pkg/net/http/#Client
// If CheckRedirect is nil, the Client uses its default policy,
// which is to stop after 10 consecutive requests.
CheckRedirect func(req *Request, via []*Request) error
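For example, here is a minimal sketch of a client that logs each hop (handy for diagnosing a loop like this one) and raises the cap; the limit of 50 is arbitrary:

package main

import (
    "errors"
    "log"
    "net/http"
)

func main() {
    client := &http.Client{
        CheckRedirect: func(req *http.Request, via []*http.Request) error {
            log.Printf("redirect %d -> %s", len(via), req.URL) // trace each hop
            if len(via) >= 50 { // raise the cap from the default of 10
                return errors.New("stopped after 50 redirects")
            }
            return nil // returning nil means: follow this redirect
        },
    }

    response, err := client.Get("https://en.wikipedia.org/wiki/List_of_S%26P_500_companies")
    if err != nil {
        log.Fatal(err)
    }
    defer response.Body.Close()
    log.Println(response.Status)
}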
I had this issue with Wikipedia URLs containing %26, because Wikipedia redirects them to a version of the URL with &, which Go then re-encodes to %26, which Wikipedia redirects back to &, and so on.
Oddly, removing gcc-go (v1.4) from my Arch box and replacing it with go (v1.5) has fixed the problem.
I'm guessing this can be put down to the changes in net/http between v1.4 and v1.5, then.