Socket.IO using Cloudflare gives 404s for /socket.io/?EIO=3&transport=polling&t=M8UDUNL

I am working on a Node.js site that uses socket.io on ports 8443 and 443. The site works locally, and it also worked when Namecheap hosted the DNS records pointing to the production server. I recently switched the nameservers to Cloudflare and added the same A record there that I was using at Namecheap. Now everything works except socket.io on port 443. Requests to port 443 that do not use socket.io work fine.
I am getting this error:
GET https://<domain>/socket.io/?EIO=3&transport=polling&t=M8UDaUZ 404 ()
Port 8443 socket.io requests are getting 200 responses, and include an sid:
https://<domain>:8443/socket.io/?EIO=3&transport=polling&t=M8UDUTx&sid=cuq1LLdLLVSj2F4FAAAN
I am not sure whether the missing sid on the port 443 requests indicates the problem. When I asked Cloudflare support about it, their response was that a 404 means I should check the path.
The only thing that has changed is the DNS, so I don't think this can be a path problem or a code problem in the site. It seems like it has to be something Cloudflare does differently than Namecheap for DNS or socket.io connections.
I don't see any other errors. Does anyone know what the problem could be or how to fix it for Cloudflare?

Have you managed to solve the issue?
I was dealing with the same issue (or one that looked the same), and what helped me was attaching socket.io to the plain HTTP server.
Server code:
var http = require('http');
var https = require('https');

// Plain HTTP server -- socket.io is attached to this one.
var server = http.createServer(app).listen(80, function () {
console.log('Express HTTP server listening');
});
var io = require('socket.io').listen(server);

// HTTPS server for the rest of the site (port 443 assumed here;
// httpsOptions holds the key/cert).
var secureServer = https.createServer(httpsOptions, app).listen(443, function () {
console.log('Express HTTPS server listening');
});
Client uses wss:// (so transport is secure).
Also I have the Cloudflare option "proxy all HTTP to HTTPS" enabled.


Keycloak is missing port in OpenID config response

This one seems odd. When I fetch the OpenID config via Postman or in the browser, I get a valid config response.
For example a GET via Postman or in the browser to
http://127.0.0.1:8080/auth/realms/myrealm/.well-known/openid-configuration
returns the endpoint including the port 8080 correctly:
{
snip
"jwks_uri": "http://127.0.0.1:8080/auth/realms/myrealm/protocol/openid-connect/certs"
snip
}
However, fetching from the very same host, target, port, and scheme (http) in my C++ application returns the config endpoints all without a port (e.g. 8080 is missing):
{
snip
"jwks_uri":"http://127.0.0.1/auth/realms/myrealm/protocol/openid-connect/certs"
snip
}
I do not see any issue in my C++ client code, and I'm not sure what's making the difference at all. For completeness, this is the C++ code I'm using, annotated with the actual values sent in the request, though it should not really be a matter of the programming language used:
req_.version(version); // HTTP 1.1
req_.method(method); // GET
req_.target(target); // /auth/realms/myrealm/.well-known/openid-configuration
req_.set(http::field::host, host); // 127.0.0.1
static const std::string agent = app.myAgent();
req_.set(http::field::user_agent, agent);
req_.set(http::field::content_type, contentType); // application/json
I have two questions here:
1) What causes Keycloak not to add the port to the endpoints? And how to workaround it?
2) What's making the difference between the calls? This should be a vanilla GET request.
Figured it out. According to the HTTP spec, the port must be included in the Host header if it's not the default port:
https://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.23
That resolved the issue. Still, my opinion is that it should be fixed in Keycloak as well. I would expect static URLs, not something that changes depending on your request. For example, you could set the Host header to localhost during the request and it will return "localhost" in the config; set it to 127.0.0.1 and it will return that. What will it return for an actual IP when queried locally, while the server has a public IP? I didn't try that, but it seems odd enough.
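The rule being cited can be sketched as a small helper (a hypothetical `hostHeader` function, not Keycloak code): the Host header carries the port only when it differs from the scheme's default, and Keycloak mirrors whatever Host it receives back into the discovery URLs.

```javascript
// Sketch: build a Host header value per the RFC section linked above.
// The port is appended only when it is not the scheme default, which is
// why sending "Host: 127.0.0.1" made Keycloak emit port-less URLs.
function hostHeader(host, port, scheme) {
  const defaultPorts = { http: 80, https: 443 };
  return port === defaultPorts[scheme] ? host : host + ':' + port;
}

console.log(hostHeader('127.0.0.1', 8080, 'http'));   // "127.0.0.1:8080"
console.log(hostHeader('example.com', 443, 'https')); // "example.com"
```

In the Boost.Beast code above, the fix amounts to passing the host together with the non-default port ("127.0.0.1:8080") to `req_.set(http::field::host, ...)`.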

Proxy requests to backend using h2c

Re-asking the question from HA-Proxy discourse site here in the hopes of getting more eyes on it.
I am using HA-Proxy version 1.9.4 2019/02/06 to proxy HTTP traffic to an h2c backend. However, I am seeing HA-Proxy set the :scheme to https (and, as far as I can tell, use TLS for the request) when proxying it. When I hit the backend directly, the :scheme is set to http and the request is cleartext, as expected. I have verified this HA-Proxy behavior using Wireshark.
Any suggestions on what I should change in my configuration to make sure the :scheme is set to http when the request is proxied to the backend?
I am using curl 7.54.0 to make requests:
$ curl http://localhost:9090
where HA-Proxy is listening on port 9090.
My HA-Proxy config file:
global
    maxconn 4096
    daemon

defaults
    log global
    option http-use-htx
    timeout connect 60s
    timeout client 60s
    timeout server 60s

frontend waiter
    mode http
    bind *:9090
    default_backend local_node

backend local_node
    mode http
    server localhost localhost:8080 proto h2
It's not supported yet. The client=>haproxy connection can be HTTP/2, the haproxy=>server connection cannot.
https://cbonte.github.io/haproxy-dconv/1.9/configuration.html#1.1
HTTP/2 is only supported for incoming connections, not on connections
going to servers.
Just add proto h2 to the server definition.
Quoting the configuration example:
server server1 192.168.1.13:80 proto h2
It's an experimental feature of haproxy-1.9, you must enable option http-use-htx to use it.
option http-use-htx is enabled by default since haproxy-2.0-dev3.
This was reported as an issue on haproxy's GitHub and has been fixed in version 2.0.
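Putting the answers together, a minimal sketch of the relevant sections for haproxy 2.0 or later (where the :scheme issue is fixed and htx is the default) might look like this; the ports are the ones from the question:

```
frontend waiter
    mode http
    bind *:9090
    default_backend local_node

backend local_node
    mode http
    # "proto h2" with no "ssl" keyword gives cleartext HTTP/2 (h2c)
    server localhost localhost:8080 proto h2
```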

Redirect to URL using squid and squidGuard

I am trying to redirect one URL to another using squidGuard, but it's not working. Can anybody help me? I'm using Ubuntu 16.04.
Use case: squid should redirect www.abc.com to http://localhost:3000
and work normally for all other URLs. I have tried many things found on the internet, but none worked for me. Can somebody help with a good tutorial or example?
Tarun,
The squidGuard redirection URL should be a valid IP address or FQDN accessible to the clients. In your case the traffic may well be correctly redirected to http://localhost:3000, but for the clients, localhost points to their own loopback address.
Suppose you open www.abc.com on your machine: after the request hits the proxy server, your browser is redirected to http://localhost:3000, so effectively to port 3000 on your own machine. Since your machine is not listening on port 3000, it looks like the URL redirection is not working. Please use the IP address or FQDN of the proxy host in place of localhost and it should work.
Shahnawaz
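For reference, a minimal squidGuard sketch of this setup; the dest name, list path, and proxy address (192.168.1.10 here) are placeholders, not values from the question:

```
# squidGuard.conf (sketch): requests matching the "abc" destination list
# are rewritten to the app on port 3000; everything else passes through.
dest abc {
    domainlist abc/domains    # file containing: www.abc.com
}

acl {
    default {
        pass !abc all
        redirect http://192.168.1.10:3000
    }
}
```

squid itself must be pointed at squidGuard via `url_rewrite_program` in squid.conf for the rewrite to take effect.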

How does my browser know it needs to connect to port 443 or port 80?

This is what I am trying to do:
Open a browser and start to browse any https website like Gmail or Google.com
I can see through Wireshark that the name resolution is being done by the DNS server.
But after that, the connection is directly established to port 443 (starting from TCP handshake)
One thing I am not able to understand is how the browser knows that it needs to connect to port 443. I explored the DNS packet, but it contains only the destination address; there is no info saying it should connect to port 443.
Even supposing the browser had some first-query priority (check whether port 443 is open and connect to it, otherwise fall back to port 80), I am not able to see any such behavior: if I go to a plain HTTP website, there is no traffic from the browser indicating that it first probed port 443 and then went to port 80.
I am sure that I am missing something here, but not sure what it is.
The presence of https: in the URL tells it that.
The browser (client) uses the HTTP or HTTPS in the address to determine which port to use...
However the server can be configured to require HTTPS, and to switch/redirect an HTTP port 80 connection to HTTPS port 443 with encryption & certificate. So if the browser connects to a server via HTTP port 80, the server can then immediately switch/redirect the connection to HTTPS port 443. The server may even be configured the other way around to switch/redirect a connection from HTTPS port 443 to HTTP port 80.
I think this is sort of like asking why an FTP client uses the FTP port.
Unless you specify a port with "http://...:port", the browser uses 80 for http and 443 for https, as that's what the protocol defines. But:
A server may respond with a "Strict-Transport-Security: max-age=..." header, and the browser is then required to retry over https and remember this.
In addition, Chrome ships with a large preloaded HSTS list (see HSTS),
so even if you type http for a site on the HSTS list, the browser will look at its HSTS configuration, see that the site is listed, and switch to HTTPS on port 443 without trying http on port 80 first.

Using Jersey Client on port 443

I want to configure the Jersey client to use port 443 when connecting to a web resource. I attempted to hard-code the port in the resource locator, but the client reverts to port 80. I think this works automatically when using HttpURLConnection, but with HttpClient it appears you have to configure it manually.
Can someone suggest how I might do this?
FYI, I have already tried this with the HttpClient credentials provider:
httpClient.getCredentialsProvider().setCredentials(new AuthScope(null, 443, null, "https"), creds);
And also
Scheme schemeHttps = new Scheme("https", SSLSocketFactory.getSocketFactory(), 443);
client.getConnectionManager().getSchemeRegistry().register(schemeHttps);
Thanks.
This turned out to be an issue with the proxy settings in my Eclipse IDE. The IIS server could not resolve the "localhost" address, so when I changed it to 127.0.0.1 it worked.