Rust warp redirect removing some headers

I'm trying to redirect requests to some URL. My server uses Rust with warp.
The library https://docs.rs/warp-reverse-proxy/latest/warp_reverse_proxy/ seems to do exactly what I need, but when it forwards the request with all of the original headers, the server receiving it rejects the call for two reasons:
- the Host header isn't what it expects
- the Content-Length header isn't what it expects
I tried removing those headers, but I can't make the filters work. I tried so many combinations that pasting all of them here wouldn't be very helpful.
Is there a preferred way of capturing a request, removing some headers and then redirecting it?
Edit to add some code:
warp::serve(
    warp::get()
        .and(warp::path("graphiql"))
        .and(juniper_warp::graphiql_filter("/graphql", None))
        .or(warp::path("graphql").and(graphql_filter))
        // .or(reverse_proxy_filter(
        //     "".to_string(),
        //     "https://example.com/".to_string(),
        // ))
        .with(log),
)
.run(([127, 0, 0, 1], 8080))
.await
user -> proxy.com -> example.com
The reverse_proxy_filter forwards the request, but the host on the other side rejects the call because the two hosts are on different domains: when example.com receives the request, it rejects it because the Host header still contains the original domain (proxy.com).
Edit: Solved by adding another filter:
.or(warp::any().and(request_filter).and_then(
    |path, query, method, mut headers: Headers, body: Body| {
        // Strip the headers the upstream server rejects before forwarding.
        headers.remove("Host");
        headers.remove("Content-Length");
        proxy_to_and_forward_response(
            "https://example.com".to_string(),
            "".to_string(),
            path,
            query,
            method,
            headers,
            body,
        )
    },
))
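For completeness, here is a minimal sketch of how that extra filter can be composed into the server. It assumes request_filter in the snippet above comes from warp_reverse_proxy's extract_request_data_filter() and that Headers and Body are the crate's re-exports used above (check the crate docs for the exact names in your version); the GraphQL routes are replaced by a placeholder, and the dependencies are warp, warp-reverse-proxy and tokio. Treat it as an illustration rather than a drop-in.

use warp::Filter;
use warp_reverse_proxy::{extract_request_data_filter, proxy_to_and_forward_response, Body, Headers};

#[tokio::main]
async fn main() {
    let log = warp::log("proxy");

    // Placeholder for the GraphiQL/GraphQL routes from the snippet above.
    let graphiql = warp::get()
        .and(warp::path("graphiql"))
        .map(|| "graphiql placeholder");

    // Extracts (path, query, method, headers, body) from the incoming request.
    let request_filter = extract_request_data_filter();

    // Anything not matched above is forwarded to example.com, with the
    // Host and Content-Length headers stripped first.
    let proxy = warp::any().and(request_filter).and_then(
        |path, query, method, mut headers: Headers, body: Body| {
            headers.remove("Host");
            headers.remove("Content-Length");
            proxy_to_and_forward_response(
                "https://example.com".to_string(),
                "".to_string(),
                path,
                query,
                method,
                headers,
                body,
            )
        },
    );

    warp::serve(graphiql.or(proxy).with(log))
        .run(([127, 0, 0, 1], 8080))
        .await;
}

Because the proxy filter is attached with .or, it only runs for requests that the earlier routes reject, so the GraphQL endpoints keep working unchanged.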

Related

Resumable chunked upload to google storage fails with CORS

I am creating a signedUrl for upload with Java SDK as follows:
val signedUrl = storage
.signUrl(
BlobInfo.newBuilder(BlobId.of(contentType.bucket, objectName)).build(),
15,
TimeUnit.MINUTES,
// chunked uploads of videos/audios must start with initial POST, see below
Storage.SignUrlOption.httpMethod(HttpMethod.POST),
Storage.SignUrlOption.withExtHeaders(
mapOf(
// https://stackoverflow.com/questions/55364969/how-to-use-gcs-resumable-upload-with-signed-url
"x-goog-resumable" to "start",
),
),
Storage.SignUrlOption.signWith(storageUploadCredentials),
Storage.SignUrlOption.withV4Signature(),
)
.toString()
Notice that above I am using HttpMethod.POST so that I can create the session URL for the resumable upload:
signedUrl
    .httpPost()
    // necessary to set up in CORS file
    // https://stackoverflow.com/a/53324444
    .header("x-goog-resumable", "start")
    // https://stackoverflow.com/a/29057686
    // https://stackoverflow.com/a/36798073
    .header("Origin", origin)
    .response()
Here I am passing the Origin header so that I get the correct CORS header, Access-Control-Allow-Origin, in return. However, no such header is returned in the response. There should be one, I hope?
Moreover, when I take the Location value and try to PUT to it with the Origin header, Access-Control-Allow-Origin is missing as well. The strange thing is that if I POST to this session URL, I get a 405 error, but Access-Control-Allow-Origin is there!
What am I doing wrong?
The reason was that HttpURLConnection refused to add the Origin header; sun.net.http.allowRestrictedHeaders needed to be set to true.
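For reference, a minimal sketch of that workaround in Kotlin. The property has to be set before the first HttpURLConnection is created, because the JDK builds its restricted-header list once in a static initializer; passing -Dsun.net.http.allowRestrictedHeaders=true as a JVM flag achieves the same thing.

fun main() {
    // Must run before any HttpURLConnection is used, otherwise the
    // Origin header is silently dropped from outgoing requests.
    System.setProperty("sun.net.http.allowRestrictedHeaders", "true")

    // ... create the signed URL and POST with the Origin header as shown above ...
}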

mitmproxy save both original and forwarded request and response

What is a good way to save / write / log (request, response) pairs on both ends (client-facing, server-facing) of mitmproxy? That is, all 4 of:
c->p: original request from the client
p->c: final response to the client, possibly rewritten by the proxy
p->s: request forwarded to the server and possibly rewritten by the proxy
s->p: original response from the server
                 c->p                    p->s
               original           forwarded rewritten
               request                  request
        --------------------->      -------------->
client                         proxy                server
        <---------------------      <--------------
                 p->c                    s->p
         forwarded rewritten           original
               response                response
As far as I can see, the -w flag of mitmdump only saves a single (request, response) pair per flow, so either the original or the forwarded request/response is not saved (and possibly neither, only some intermediate stage before/after other addons that modify the request/response).
Thanks for any help.

CORS requests with preflight on CouchDB

We are trying to send cross-domain HTTP requests to CouchDB.
To achieve this, we set up CouchDB with the following settings (inspired by the add-cors-to-couchdb tool):
[HTTPD]
enable_cors = true
[CORS]
origins = *
credentials = true
methods = GET, PUT, POST, HEAD, DELETE
headers = accept, authorization, content-type, origin, referer, x-csrf-token
And we wrote code similar to this (but not hardcoded, of course):
<html>
  <body>
    <script>
      fetch("http://acme.org:5984/mydb/323958d9b42be0aaf811a3c96b4e5d9c", {
        method: 'DELETE',
        headers: {'If-Match': "1-0c2099c9793c2f4bf3c9fd6751e34f95"}
      }).then(x => {
        if (x.ok) return x.json().then(console.log);
        console.error(x.statusText);
      });
    </script>
  </body>
</html>
While it works fine with GET and POST, we get 405 Method Not Allowed on DELETE. The browser says that the preflight response (to the OPTIONS request) was not successful, while the server answers {"error":"method_not_allowed","reason":"Only DELETE,GET,HEAD,POST,PUT,COPY allowed"}.
We tried with both CouchDB 2.1.1 and 1.6.1. We also tried replacing origins = * with origins = http://localhost:8080 (where 8080 is the port serving the HTML above), and setting credentials to false.
From a comment by @Ingo Radatz on a related question, I finally understood that EVERY header used in the request must be included in the CouchDB CORS settings.
In my personal case, I had to include if-match in the accepted headers:
[HTTPD]
enable_cors = true
[CORS]
origins = *
methods = GET, PUT, POST, HEAD, DELETE
headers = accept, authorization, content-type, origin, referer, if-match

Perfect, Swift: getting server address from request in handler

In a request handler processing e.g. GET https://example.com/collections/1 or POST http://0.0.0.0:8080/collections, how do I get the server address, https://example.com and http://0.0.0.0:8080 respectively?
Currently I'm constructing it like so:
var url = "\(httpPrefix)\(server.serverAddress)"
if server.serverPort != 443 { url += ":\(server.serverPort)" }
where httpPrefix is:
let httpPrefix = isLinux ? "https://" : "http://"
But it feels like there's a better way...
I've discovered that the Host header contains either example.com or 0.0.0.0:8080, depending on the server.
So the following produces just the result I need. httpScheme is still hardcoded (which is something I don't like, but I don't see other options yet).
httpScheme + request.header(.host)!
I'm force-unwrapping, as I observe the Host header is always there.

Wikipedia URL stopped after 10 redirects error in GoLang

Upon executing an HTTP Get request, I receive the following error:
2015/08/30 16:42:09 Get https://en.wikipedia.org/wiki/List_of_S%26P_500_companies:
stopped after 10 redirects
In the following code:
package main

import (
    "log"
    "net/http"
)

func main() {
    response, err := http.Get("https://en.wikipedia.org/wiki/List_of_S%26P_500_companies")
    if err != nil {
        log.Fatal(err)
    }
    defer response.Body.Close() // also ensures response is used, so the snippet compiles
}
I know that according to the documentation,
// Get issues a GET to the specified URL. If the response is one of
// the following redirect codes, Get follows the redirect, up to a
// maximum of 10 redirects:
//
// 301 (Moved Permanently)
// 302 (Found)
// 303 (See Other)
// 307 (Temporary Redirect)
//
// An error is returned if there were too many redirects or if there
// was an HTTP protocol error. A non-2xx response doesn't cause an
// error.
I was hoping that somebody knows what the solution would be in this case. It seems rather odd that this simple URL results in more than ten redirects; it makes me think that there may be more going on behind the scenes.
Thank you.
As others have pointed out, you should first give some thought to why you are encountering so many HTTP redirects. Go's default policy of stopping at 10 redirects is reasonable. More than 10 redirects could mean you are in a redirect loop, which could be caused outside your code: something about your network configuration, proxy servers between you and the website, etc.
That said, if you really do need to change the default policy, you do not need to resort to editing the net/http source as someone suggested.
To change the default handling of redirects you will need to create a Client and set CheckRedirect.
For your reference:
http://golang.org/pkg/net/http/#Client
// If CheckRedirect is nil, the Client uses its default policy,
// which is to stop after 10 consecutive requests.
CheckRedirect func(req *Request, via []*Request) error
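If you really do need to follow more than 10 redirects, here is a minimal sketch of such a client; the cap of 30 is an arbitrary number chosen for illustration:

package main

import (
    "errors"
    "fmt"
    "log"
    "net/http"
)

func main() {
    // A Client with a custom redirect policy: follow up to 30 redirects
    // instead of the default 10.
    client := &http.Client{
        CheckRedirect: func(req *http.Request, via []*http.Request) error {
            if len(via) >= 30 {
                return errors.New("stopped after 30 redirects")
            }
            return nil // nil means "follow this redirect"
        },
    }

    response, err := client.Get("https://en.wikipedia.org/wiki/List_of_S%26P_500_companies")
    if err != nil {
        log.Fatal(err)
    }
    defer response.Body.Close()
    fmt.Println(response.Status)
}

Returning nil from CheckRedirect tells the client to follow the redirect; returning an error stops the chain.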
I had this issue with Wikipedia URLs containing %26, because Wikipedia redirects them to a version of the URL with &, which Go then encodes back to %26, which Wikipedia redirects to &, and ...
Oddly, removing gcc-go (v1.4) from my Arch box and replacing it with go (v1.5) has fixed the problem.
I'm guessing this can be put down to the changes in net/http between v1.4 and v1.5 then.