I am trying to send (copy) all of the nginx traffic to a Unix socket.
Here is the relevant part of my nginx.conf:
upstream unixsocket { server unix:/var/www/tmp2.sock; }

post_action /sendLogging;

location /sendLogging {
    proxy_pass http://unixsocket;
}
Should I start a server on this socket?
socket -sl /var/www/tmp2.sock
If I do this, I can't see any requests coming to the socket.
Moreover, with this config nginx CPU usage jumps to 50-90% while testing with just ONE request.
--
EDIT:
My mistake: the .sock file was not writable by the nginx worker process. I gave it the appropriate permissions.
The reason for the high CPU usage was an internal redirect triggered by the post_action directive.
If anyone else faces the internal-redirect issue with post_action: I solved it by returning 444 from the location, which works in my case.
Thanks.
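For anyone wiring this up from scratch, here is a minimal sketch of the setup described above. It assumes something is already listening on /var/www/tmp2.sock and that the socket file is writable by the nginx worker user; the 444 detail from the edit is left out for clarity.

# sketch only: duplicate each request to a Unix socket via post_action
upstream unixsocket {
    server unix:/var/www/tmp2.sock;   # must be writable by the worker user
}

server {
    listen 80;

    location / {
        # ... normal handling of the original request ...
        post_action /sendLogging;     # subrequest fired after the response is sent
    }

    location /sendLogging {
        internal;                     # not reachable directly by clients
        proxy_pass http://unixsocket;
    }
}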
Related
I have an issue with my Kubernetes routing.
The issue is that one of the pods makes a GET request to auth.domain.com:443, but the internal routing directs it to auth.domain.com:8443, which is the container port.
Because the host answering the SSL negotiation identifies itself as auth.domain.com:8443 instead of auth.domain.com:443, the connection times out.
[2023/01/16 18:03:45] [provider.go:55] Performing OIDC Discovery...
[2023/01/16 18:03:55] [main.go:60] ERROR: Failed to initialise OAuth2 Proxy: error intiailising provider: could not create provider data: error building OIDC ProviderVerifier: could not get verifier builder: error while discovery OIDC configuration: failed to discover OIDC configuration: error performing request: Get "https://auth.domain.com/realms/master/.well-known/openid-configuration": net/http: TLS handshake timeout
(If someone knows the root cause of why it is not identifying itself with the correct port 443 but instead the container port 8443, that would be extremely helpful as I could fix the root cause.)
To work around this issue, my idea is to force the request to route out of the pod onto the internet and then back into the cluster.
I tested this by setting up the file I am trying to GET on a host external to the cluster, and in that case the SSL negotiation works fine and the GET request succeeds. However, I need to serve the file from within the cluster, so this isn't a viable option.
However, if I can somehow force the pod to route through the internet, I believe it would work. I am having trouble with this, though, because every time the pod looks up auth.domain.com it sees that it is an internal Kubernetes IP and rewrites the routing so that the request is routed locally to the 10.0.0.0/24 address. After doing this, it always seems to come back as auth.domain.com:8443 with the wrong port.
If I could force the pod to route via the full publicly routable IP, I believe it would work, as it would come back as the external-facing auth.domain.com:443 with the correct 443 port.
Does anyone have any ideas on how I can achieve this, or on how to stop the server from identifying itself with the wrong auth.domain.com:8443 port instead of auth.domain.com:443, which causes the SSL negotiation to fail?
I'm looking for a solution to dispatch requests with nginx in order to optimize the network bandwidth of the main server (it should dispatch download requests to some other servers).
Here is an extract of an nginx sample that performs load balancing:
upstream mystream {
    server ip1:port1;
    server ip2:port2;
}

server {
    listen myport;

    location / {
        proxy_pass http://mystream;
    }
}
The problem with this sample is that the main server acts as a proxy for the background servers instead of redirecting the client (it serves the file itself and therefore does not save any bandwidth).
Is there a way to configure nginx to dispatch download requests to the background servers without acting as a proxy? (Keeping the URL would be nice, but I'm open to rewriting it if needed.)
Thanks
I finally found that split_clients is the best solution for my case, as the goal was to redirect clients to various download sites without any specific rule.
Note that this changes the URL, so the client will see the download server's URL (not important in my case).
With this solution, a client asking for server:myport/abcd will be redirected to serverx:portx/abcd based on MurmurHash2; see http://nginx.org/en/docs/http/ngx_http_split_clients_module.html
split_clients "${remote_addr}" $destination {
    40% server1:port1;
    30% server2:port2;
    20% server3:port3;
    10% server4:port4;
}

server {
    listen myport;

    location / {
        return 302 http://$destination$request_uri;
    }
}
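For context, a slightly fuller sketch with hypothetical host names: the split_clients block has to sit at the http level, next to the server block, and a * entry catches whatever percentage is left unassigned.

http {
    # hashes $remote_addr with MurmurHash2 and buckets clients by percentage
    split_clients "${remote_addr}" $destination {
        40%  dl1.example.com:8080;
        30%  dl2.example.com:8080;
        *    dl3.example.com:8080;   # remainder of clients
    }

    server {
        listen 80;

        location / {
            # 302 sends each client straight to its assigned download host
            return 302 http://$destination$request_uri;
        }
    }
}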
Update
If you want to keep a single URL and have the background servers reply directly to the client without any URL dispatch, you can configure load balancing using Linux Virtual Server (LVS) in Direct Routing mode.
To configure it, you set up a Director VM and several "real servers" to which requests are dispatched transparently. See http://www.linuxvirtualserver.org/VS-DRouting.html
That's just how reverse proxying works:
A reverse proxy is a type of proxy server that retrieves resources on behalf of a client from one or more servers. These resources are then returned to the client as if they originated from the Web server itself.
One possible solution is to configure your upstream servers to serve traffic to the public and then redirect your clients there.
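As a rough sketch of that idea (the host name is hypothetical): the front server only answers with a redirect, and the client then fetches the file directly from a publicly reachable upstream, so the front server's bandwidth is not used for the transfer.

server {
    listen 80;

    location /downloads/ {
        # files1.example.com must be publicly reachable by clients
        return 302 http://files1.example.com$request_uri;
    }
}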
the nginx.conf:
server {
    listen 8080;
}

server {
    listen 80;
    server_name localhost;

    location / {
        root /test/public;
        index index.html index.htm;
    }

    location /api {
        proxy_pass http://localhost:8080;
    }

    error_page 500 502 503 504 /50x.html;

    location = /50x.html {
        root html;
    }
}
The request and response headers are almost plain; no auth/session/cache parameters are involved.
For the same URI, the first request returns successfully, while the second returns 404, and so on.
I've tried disabling proxy buffering, but it has no effect.
I'm 99.99% sure you have IPv6 enabled. In that case localhost resolves to two IP addresses, 127.0.0.1 and [::1], and nginx balances requests between them.
http://nginx.org/r/proxy_pass:
If a domain name resolves to several addresses, all of them will be used in a round-robin fashion.
On the other hand, your listen 8080; directive tends to listen only on IPv4 addresses (this depends on the OS, nginx version, and other environment details).
You could solve your problem in several ways (both sketched below):
use an explicit IPv4 address: proxy_pass http://127.0.0.1:8080;
use explicit IPv4 and IPv6 listening: listen [::]:8080 ipv6only=off;
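A minimal sketch of both variants against the config from the question; use one or the other.

# variant 1: pin the upstream to IPv4 so nginx never tries [::1]:8080
location /api {
    proxy_pass http://127.0.0.1:8080;
}

# variant 2: make the backend listen on both IPv4 and IPv6
server {
    listen [::]:8080 ipv6only=off;
}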
I observed the same problem in a Docker environment, but the reason was independent of nginx. I had just made a stupid copy-paste mistake.
The setup:
I deployed the Docker containers via several docker-compose files, so I have the following structure:
an API gateway container based on nginx, which references
Webserver 1, based on nginx, and
Webserver 2, based on nginx
Each of them has its own Dockerfile and docker-compose file. Because the structure of the compose files for Webserver1 and Webserver2 is very similar, I copied it and replaced the container name and some other stuff. So far so good. Starting and stopping the containers was no problem, and watching them with docker container ls showed no abnormality. Accessing Webserver1 and Webserver2 via http://localhost:<Portnumber for server> was no problem either, but accessing Webserver1 through the API gateway led to alternating 200 and 404 responses, while Webserver2 worked fine.
After days of debugging I found the problem: as mentioned, I copied the docker-compose file from Webserver1 for Webserver2, and while I replaced the container name, I forgot to replace the service name. My docker-compose file starts like this:
version: '3'
services:
  webserver1:
    image: 'nginx:latest'
    container_name: webserver2
    ...
This constellation also leads to the described behavior.
Hope someone can save some days or hours by reading this post ;-)
André
Well, in my case the problem was pretty straightforward. I had about 15 server blocks, and the port I had set up for my Node.js proxy_pass was already being used by some old server block hiding in my enabled-servers directory. So nginx was randomly proxying between the old server, which was not running, and the one I had just started.
So I just grepped for the port number in that directory and found two instances. I changed my port number and the problem was fixed.
I've set up an example project that uses the latest version of nginx, which supports HTTP/2.
I was going off this official blog post: https://www.nginx.com/blog/nginx-1-9-5/
Here is a working code example (with details of how to set up everything in the README; nginx.conf pasted below as well): https://github.com/Integralist/Docker-Examples/tree/master/Nginx-HTTP2
user nobody nogroup;
worker_processes auto;

events {
    worker_connections 512;
}

http {
    upstream app {
        server app:4567;
    }

    server {
        listen *:80 http2;

        location /app/ {
            return 301 https://$host$request_uri;
        }
    }

    server {
        listen *:443 ssl http2;
        server_name integralist.com;

        ssl_certificate /etc/nginx/certs/server.crt;
        ssl_certificate_key /etc/nginx/certs/server.key;
        ssl_trusted_certificate /etc/nginx/certs/ca.crt;

        location /app/ {
            proxy_pass http://app/;
        }
    }
}
Although the example works, I've hit an issue whereby if I go to the service endpoint in my browser over HTTP, it first downloads a file called download and then redirects correctly to HTTPS.
I'm not sure what this file is or why the redirection causes it to happen, but its content is ġˇˇˇˇˇˇ ?
If I try using curl (e.g. curl --insecure http://$dev_ip:$dev_80/app/foo) the redirect fails to happen and I think it's because of this weird downloaded file? The response from curl to stdout is just ??????
I wonder if this is possibly a side-effect of using Docker to containerize the Ruby application and the nginx server?
Update
I removed http2 from listen *:80 http2; so it now reads listen *:80; and the download no longer happens, but the problem now is getting the redirect to point to the correct Docker port :-/
To clarify, I have an nginx container with dynamic port generation (-P): one host port maps to the containerized nginx service on :80 and one maps to :443. My nginx.conf redirects traffic from HTTP to HTTPS, but I need to be able to identify the 443 host port number.
e.g. docker ps shows 0.0.0.0:32791->80/tcp, 0.0.0.0:32790->443/tcp
I'm trying to redirect http://example.com:32791/app/foo to https://example.com:32790/app/foo
I'm not sure how I could configure nginx to know the relevant 443 Docker port number, as that's dynamically generated.
I could be explicit and use -p when running the nginx container, but I'd still need to pass that port number into nginx as a variable somehow (a quick Google suggests using Docker's -e "my_port=9999" and then accessing it via nginx's env declaration).
Update 2
I've even tried swapping to explicit ports -p and then hardcoding the port number into the nginx.conf with no luck (e.g. docker run ... -p 60443:443 -p 60080:80)...
listen *:80;

location /app/ {
    return 301 https://$host:60443$request_uri;
}
...if I hit http://example.com:60080/app/ it gets redirected to https://example.com:60443/ so I'm almost there, but the actual path /app/ wasn't appended when redirecting?
the problem I have is trying to get the redirect to point to the correct docker port now
If your nginx Docker container is linked (docker run --link) with the Ruby one, the correct port is the one EXPOSEd in the Ruby container image (even if you don't know which host port that EXPOSEd port was mapped to).
A direct container-to-container link can use the port mentioned in the Dockerfile.
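A rough sketch of how the two ports end up being used, reusing the explicit -p 60443:443 mapping from Update 2 (the numbers are only examples, not a confirmed fix):

upstream app {
    # container-to-container: the port EXPOSEd by the ruby image
    server app:4567;
}

server {
    listen *:80;

    location /app/ {
        # client-facing redirect: must name the port published on the host
        return 301 https://$host:60443$request_uri;
    }
}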
We are using the nginx proxy_pass feature to bridge RESTful calls to a backend app, and we use the nginx WebSocket proxy for the same system at the same time. Sometimes (I guess when the system has had no client requests for a while) nginx freezes all requests until we restart it, after which everything works well again. What is the problem? Do I have to change the keep-alive settings? I have turned off the buffering and caching features for the proxy in nginx.conf.
I found the problem. By checking the nginx error log and doing a bit of hacky sniffing and guessing, I found out that the WebSocket clients frequently disconnect and reconnect (mobile devices), nginx tries to keep those connections alive, and eventually the maximum connection limit is reached. I just decreased the timeouts and increased the maximum number of connections.
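Roughly, the knobs involved look like this (the upstream name, location, and numbers are placeholders to tune, not the actual values used):

events {
    # raise the per-worker connection cap so reconnecting mobile clients
    # don't exhaust it
    worker_connections 4096;
}

http {
    server {
        location /ws/ {
            proxy_pass http://backend;

            # WebSocket upgrade handling
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";

            # shorter timeouts so dead peers are dropped instead of
            # holding connections open
            proxy_read_timeout 60s;
            proxy_send_timeout 60s;
        }
    }
}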