Kubernetes: using container as proxy

I have the following pod setup:
apiVersion: v1
kind: Pod
metadata:
  name: proxy-test
  namespace: test
spec:
  containers:
  - name: container-a
    image: <Image>
    imagePullPolicy: Always
    ports:
    - name: http-port
      containerPort: 8083
  - name: container-proxy
    image: <Image>
    ports:
    - name: server
      containerPort: 7487
      protocol: TCP
  - name: container-b
    image: <Image>
I exec into container-b and execute the following curl request:
curl --proxy localhost:7487 -X POST http://localhost:8083/
For some reason, http://localhost:8083/ is called directly and the proxy is ignored. Can someone explain why this happens?

Environment
I replicated the scenario on kubeadm and GCP GKE Kubernetes clusters to see if there is any difference: no, they behave the same, so I assume AWS EKS behaves the same too.
I created a pod with 3 containers within:
apiVersion: v1
kind: Pod
metadata:
  name: proxy-pod
spec:
  containers:
  - image: ubuntu # client where the connection will come from
    name: ubuntu
    command: ['bash', '-c', 'while true ; do sleep 60; done']
  - name: proxy-container # proxy - that's obvious
    image: ubuntu
    command: ['bash', '-c', 'while true ; do sleep 60; done']
  - name: server # regular nginx server which listens on port 80
    image: nginx
For this test setup I installed the squid proxy on proxy-container (what is squid and how to install it). By default it listens on port 3128.
curl was installed on ubuntu, the client container (plus the net-tools package as a bonus, for netstat).
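A minimal sketch of how the containers can be prepared, assuming the pod and container names above and Ubuntu's default package names:
# prepare the proxy container: install squid (it listens on port 3128 by default;
# squid daemonizes on start, and if the package scripts already started it,
# the extra invocation just reports that it is already running)
kubectl exec proxy-pod -c proxy-container -- bash -c 'apt-get update && apt-get install -y squid && squid'
# prepare the client container: install curl (plus net-tools as a bonus, for netstat)
kubectl exec proxy-pod -c ubuntu -- bash -c 'apt-get update && apt-get install -y curl net-tools'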
Tests
Note!
I used 127.0.0.1 instead of localhost because squid has some name-resolution issues with it and I didn't find an easy/fast solution.
curl is used with the -v flag for verbosity.
We have the proxy on 3128 and nginx on 80 within the pod:
# netstat -tulpn
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:3128 0.0.0.0:* LISTEN -
tcp6 0 0 :::80 :::* LISTEN -
curl directly:
# curl 127.0.0.1 -vI
* Trying 127.0.0.1:80... # connection goes directly to port 80 which is expected
* TCP_NODELAY set
* Connected to 127.0.0.1 (127.0.0.1) port 80 (#0)
> HEAD / HTTP/1.1
> Host: 127.0.0.1
> User-Agent: curl/7.68.0
> Accept: */*
curl via proxy:
# curl --proxy 127.0.0.1:3128 127.0.0.1:80 -vI
* Trying 127.0.0.1:3128... # connecting to proxy!
* TCP_NODELAY set
* Connected to 127.0.0.1 (127.0.0.1) port 3128 (#0) # connected to proxy
> HEAD http://127.0.0.1:80/ HTTP/1.1 # going further to nginx on `80`
> Host: 127.0.0.1
> User-Agent: curl/7.68.0
> Accept: */*
squid logs:
# cat /var/log/squid/access.log
1635161756.048 1 127.0.0.1 TCP_MISS/200 958 GET http://127.0.0.1/ - HIER_DIRECT/127.0.0.1 text/html
1635163617.361 0 127.0.0.1 TCP_MEM_HIT/200 352 HEAD http://127.0.0.1/ - HIER_NONE/- text/html
NO_PROXY
The NO_PROXY environment variable might be set, although by default it's empty.
I added it manually:
# export NO_PROXY=127.0.0.1
# printenv | grep -i proxy
NO_PROXY=127.0.0.1
Now the curl request via the proxy looks like this:
# curl --proxy 127.0.0.1:3128 127.0.0.1 -vI
* Uses proxy env variable NO_PROXY == '127.0.0.1' # curl detects NO_PROXY envvar
* Trying 127.0.0.1:80... # and ignores the proxy, connection goes directly
* TCP_NODELAY set
* Connected to 127.0.0.1 (127.0.0.1) port 80 (#0)
> HEAD / HTTP/1.1
> Host: 127.0.0.1
> User-Agent: curl/7.68.0
> Accept: */*
It's possible to override the NO_PROXY envvar when executing the curl command with the --noproxy flag.
--noproxy no-proxy-list
Comma-separated list of hosts which do not use a proxy, if one is specified. The only wildcard is a single * character, which matches all hosts, and effectively disables the proxy. Each name in this list is matched as either a domain which contains the hostname, or the hostname itself. For example, local.com would match local.com, local.com:80, and www.local.com, but not www.notlocal.com. (Added in 7.19.4)
Example:
# curl --proxy 127.0.0.1:3128 --noproxy "" 127.0.0.1 -vI
* Trying 127.0.0.1:3128... # connecting to proxy as it was supposed to
* TCP_NODELAY set
* Connected to 127.0.0.1 (127.0.0.1) port 3128 (#0) # connection to proxy is established
> HEAD http://127.0.0.1/ HTTP/1.1 # connection to nginx on port 80
> Host: 127.0.0.1
> User-Agent: curl/7.68.0
> Accept: */*
This proves that the proxy works, even with localhost.
Another possibility is that something is incorrectly configured in the proxy used in the question. You can take this pod, install squid and curl into both containers, and try it yourself.
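For the setup from the original question, it is worth checking whether the image sets any proxy-related variables and retrying with --noproxy "" so curl cannot silently bypass the proxy. A sketch, reusing the pod, namespace and container names from the question:
# from inside container-b (e.g. kubectl exec -it proxy-test -n test -c container-b -- sh)
printenv | grep -i proxy      # is HTTP_PROXY / NO_PROXY / no_proxy set in the image?
curl -v --proxy localhost:7487 --noproxy "" -X POST http://localhost:8083/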

Related

K8s Liveness Probe keeps failing, but cURL from Pod is working

I am having a strange issue: the Liveness Probe keeps failing, but connecting into the Pod and checking the endpoint with cURL looks good.
Here is the output of the CURL command.
curl -v localhost:7000/health
* Expire in 0 ms for 6 (transfer 0x5595637270f0)
...
* Expire in 0 ms for 1 (transfer 0x5595637270f0)
* Trying 127.0.0.1...
* TCP_NODELAY set
* Expire in 150000 ms for 3 (transfer 0x5595637270f0)
* Expire in 200 ms for 4 (transfer 0x5595637270f0)
* Connected to localhost (127.0.0.1) port 7000 (#0)
> GET /health HTTP/1.1
> Host: localhost:7000
> User-Agent: curl/7.64.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Server: gunicorn
< Date: Mon, 13 Feb 2023 19:41:44 GMT
< Connection: close
< Content-Type: text/html; charset=utf-8
< Content-Length: 24
<
* Closing connection 0
Now here is the section of the YAML that has the probe for the Pod:
containers:
- name: flask-container
  image: path
  imagePullPolicy: Always
  volumeMounts:
  - name: cert-and-key
    mountPath: /etc/certs
    readOnly: true
  ports:
  - containerPort: 7000
  livenessProbe:
    httpGet:
      path: /health
      port: 7000
      scheme: HTTP
    initialDelaySeconds: 20
    periodSeconds: 20
imagePullSecrets:
- name: pullsecret
For some reason the Liveness probe keeps failing after creating the Pod:
Liveness probe failed: Get "http://10.224.0.130:7000/health": dial tcp 10.224.0.130:7000: connect: connection refused
Thanks in advance for any pointers!
Fixed. The issue was that the application in the Docker image was listening on localhost/127.0.0.1 instead of correctly binding to 0.0.0.0.
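Since the curl output shows Server: gunicorn, a minimal sketch of such a fix (app:app is a hypothetical module:callable, not taken from the question):
# bind to all interfaces so the kubelet can reach /health via the pod IP
# (10.224.0.130:7000), not only via 127.0.0.1 inside the container
gunicorn --bind 0.0.0.0:7000 app:app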

Keycloak redirecting to Hostname but with port number too

I have configured nginx and given Keycloak the hostname http://keycloak.formsflow.ai for localhost:8080, but the redirection URL still shows port 8080. How can I remove it?
Keycloak showing port number in redirection along with hostname
Below is my Docker config for Keycloak:
keycloak:
  image: quay.io/keycloak/keycloak:14.0.0
  container_name: keycloak
  volumes:
    - ./configuration/imports:/opt/jboss/keycloak/imports
  command:
    - "-b 0.0.0.0 -bmanagement=0.0.0.0 -Dkeycloak.import=/opt/jboss/keycloak/imports/formsflow-ai-realm.json -Dkeycloak.migration.strategy=OVERWRITE_EXISTING"
  environment:
    - DB_VENDOR=POSTGRES
    - DB_ADDR=keycloak-db
    - KEYCLOAK_HOSTNAME=keycloak.formsflow.ai
    - DB_DATABASE=${KEYCLOAK_JDBC_DB:-keycloak}
    - DB_USER=${KEYCLOAK_JDBC_USER:-admin}
    - DB_PASSWORD=${KEYCLOAK_JDBC_PASSWORD:-changeme}
    - KEYCLOAK_USER=${KEYCLOAK_ADMIN_USERNAME:-admin}
    - KEYCLOAK_PASSWORD=${KEYCLOAK_ADMIN_PASSWORD:-changeme}
  ports:
    - 8080:8080
What config do I need to set for Keycloak to remove the port number from the redirection URL?
When behind a reverse proxy, configure Keycloak properties:
PROXY_ADDRESS_FORWARDING=true
KEYCLOAK_FRONTEND_URL=http://keycloak.formsflow.ai/auth
You may also need to configure the X-Forwarded-Proto and X-Forwarded-Host headers in Nginx.
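A minimal sketch of the same settings expressed as docker run flags for the WildFly-based image from the compose file above (in docker-compose they go under environment:; the DB_* and KEYCLOAK_* variables from the original file are omitted here for brevity):
docker run -p 8080:8080 \
  -e PROXY_ADDRESS_FORWARDING=true \
  -e KEYCLOAK_FRONTEND_URL=http://keycloak.formsflow.ai/auth \
  quay.io/keycloak/keycloak:14.0.0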

Docker-compose unable to make cross container requests

I'm having network issues running services in docker-compose. Essentially I'm just trying to make a GET request through Kong to a simple Flask API I have set up. The docker-compose.yml is below:
version: "3.0"
services:
  postgres:
    image: postgres:9.4
    container_name: kong-database
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_USER=kong
      - POSTGRES_DB=kong
  web:
    image: kong:latest
    container_name: kong
    environment:
      - DATABASE=postgres
      - KONG_PG_HOST=postgres
    restart: always
    ports:
      - "8000:8000"
      - "443:8443"
      - "8001:8001"
      - "7946:7946"
      - "7946:7946/udp"
    links:
      - postgres
  ui:
    image: pgbi/kong-dashboard
    container_name: kong-dashboard
    ports:
      - "8080:8080"
  employeedb:
    build: test-api/
    restart: always
    ports:
      - "5002:5002"
I add the API to Kong with the command curl -i -X POST --url http://localhost:8001/apis/ --data name=employeedb --data upstream_url=http://localhost:5002 --data hosts=employeedb --data uris=/employees. I've tried this with many combinations of inputs, including different names and passing in the Docker network IP or the name of the test-api container as the hostname for the upstream_url. After adding the API to Kong I get:
HTTP/1.1 502 Bad Gateway
Date: Tue, 11 Jul 2017 14:17:17 GMT
Content-Type: text/plain; charset=UTF-8
Transfer-Encoding: chunked
Connection: keep-alive
Server: kong/0.10.3
Additionally, I've gone into the Docker containers with docker exec -it <container-id> /bin/bash and attempted to make curl requests to the expected Flask endpoint. While on the container running the API, I was able to make a successful call to both localhost:5002/employees and employeedb:5002/employees. However, when making the request from the container running Kong I see:
curl -iv -X GET --url 'http://employeedb:5002/employees'
* About to connect() to employeedb port 5002 (#0)
* Trying X.X.X.X...
* Connection refused
* Failed connect to employeedb:5002; Connection refused
* Closing connection 0
Am I missing some sort of configuration that exposes the containers to one another?
You need to make the employeedb container visible to Kong by defining a link, like you did with the PostgreSQL database. Just add it as an additional entry directly below - postgres and it should be reachable by Kong:
....
links:
  - postgres
  - employeedb
....
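With the link in place, the API registration should point the upstream at the employeedb service name rather than localhost. A sketch based on the command from the question (Kong 0.10.x admin API syntax assumed):
# re-register the API so Kong resolves the upstream via the link,
# not via its own localhost
curl -i -X POST --url http://localhost:8001/apis/ \
  --data name=employeedb \
  --data upstream_url=http://employeedb:5002 \
  --data hosts=employeedb \
  --data uris=/employees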

Connect nginx on host with wsgi unicorn inside docker container

While starting to dockerize my Rails application, I am facing the following problem:
My idea was to have every web application with its WSGI server and dependencies running in a separate Docker container, and the databases also running in separate containers, using docker-compose to set it all up.
Outside the containers, Nginx routes traffic to the specific container via unix sockets, depending on the domain. (I didn't want nginx in a container, to reduce complexity and to avoid having multiple nginx instances running in multiple containers to maintain multiple webapps.)
Before starting with Docker, my WSGI server and nginx were connected via unix sockets. After dockerizing, this no longer works; only connecting them via ports works now, which I would like to avoid.
Is there any way to connect Nginx on the host via unix sockets with the WSGI server inside a container? If not, what is best practice here?
My approach was to use a shared volume as the location for the socket file, but nginx can't access the socket created by Unicorn:
Socket created by unicorn:
srwxrwxrwx 1 root root 0 Nov 14 14:53 unicorn.sock=
Nginx error:
*2 connect() to unix:/ruby-webapps/myapp/shared/sockets/unicorn.sock failed (13: Permission denied) while connecting to upstream
Nginx sites-available/myapp:
upstream myapp {
  # Path to Unicorn SOCK file, as defined previously
  server unix:/ruby-webapps/myapp/shared/sockets/unicorn.sock fail_timeout=0;
}
server {
  listen 80 default_server;
  ...
}
server {
  listen 443 ssl http2;
  listen [::]:443 ssl http2;
  server_name myapp.de www.myapp.de;
  root /ruby-webapps/myapp;
  try_files $uri/index.html $uri @MyApp;
  location @MyApp {
    proxy_pass http://myapp;
    #proxy_pass http://127.0.0.1:3000;
    proxy_set_header X-Forwarded-For https;
    proxy_redirect off;
  }
}
docker-compose.yml:
version: '2'
services:
  postgresmyapp:
    image: postgres
    env_file: .env
  myapp:
    build: .
    env_file: .env
    command: supervisord -c /myapp/unicorn_supervisord.conf
    volumes:
      - .:/myapp
    ports:
      - "3000:3000"
    links:
      - postgreslberg
config/unicorn.rb:
app_dir = File.expand_path("../..", __FILE__)
shared_dir = "#{app_dir}/shared"
working_directory app_dir
rails_env = ENV['RAILS_ENV'] || 'production'
# Set unicorn options
worker_processes 2
preload_app true
timeout 30
# Set up socket location
listen "#{shared_dir}/sockets/unicorn.sock", :backlog => 64
#listen(3000, backlog: 64)
stderr_path "#{shared_dir}/log/unicorn.stderr.log"
stdout_path "#{shared_dir}/log/unicorn.stdout.log"
pid "#{shared_dir}/pids/unicorn.pid"

rolling deployment for docker containers behind load balancer

I have a problem with rolling deployments of docker containers behind a load balancer.
Here are the contents of my docker-compose.yml file:
nginx:
  image: nginx_image
  links:
    - node1:node1
    - node2:node2
    - node3:node3
  ports:
    - "80:80"
node1:
  image: nodeapi_image
  ports:
    - "8001"
node2:
  image: nodeapi_image
  ports:
    - "8001"
node3:
  image: nodeapi_image
  ports:
    - "8001"
and here is my nginx.conf:
worker_processes 4;

events { worker_connections 1024; }

http {
  upstream node-app {
    least_conn;
    server node1:8001 weight=10 max_fails=3 fail_timeout=30s;
    server node2:8001 weight=10 max_fails=3 fail_timeout=30s;
    server node3:8001 weight=10 max_fails=3 fail_timeout=30s;
  }

  server {
    listen 80;
    listen 443 ssl;
    # ssl on;
    ssl_certificate /etc/nginx/ssl/imago.io.chain.crt;
    ssl_certificate_key /etc/nginx/ssl/imago.io.key;

    location / {
      proxy_pass http://node-app;
      proxy_http_version 1.1;
      proxy_set_header Upgrade $http_upgrade;
      proxy_set_header Connection 'upgrade';
      proxy_set_header Host $host;
      proxy_cache_bypass $http_upgrade;
    }
  }
}
If I have a newly built image I want to deploy, I have to stop a node container, remove it and recreate it with the new image. The problem here is that the new container will get a new IP, and the nginx container doesn't know about that new IP; so once I recreate the last of the 3 containers behind the load balancer, the app won't serve any more, because all the IPs in the nginx container's /etc/hosts and environment variables are out of date.
I could SSH into each container, update its code by pulling from the git repo and restart the process, but that just seems wrong to me. What is the right way to do this?
There is an easier way to achieve this. Take the following docker-compose.yml file as an example:
lb:
  image: tutum/haproxy
  links:
    - app
  ports:
    - "80:80"
app:
  image: tutum/hello-world
This docker-compose file describes two services:
lb: a load balancer which uses the tutum/haproxy image
app: a sample webapp listening on port 80
If you start those services naïvely with docker-compose up -d, you will end up with only 2 containers (the load balancer and the web app).
But if you run docker-compose scale app=3 and then run docker-compose up -d again, you will end up with 4 containers: the load balancer plus 3 load-balanced app containers.
The key player here is the tutum/haproxy docker image which is able to discover the different containers it is linked to.
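The flow described above as plain commands (run in the directory containing the compose file):
docker-compose up -d          # starts 2 containers: lb and one app
docker-compose scale app=3    # scale the app service to 3 containers
docker-compose up -d          # recreate lb so its links cover all 3 app containers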
A similar solution is to use Jason Wilder's nginx-proxy image, which has the advantage of discovering new nodes live, so you won't have to restart the lb service:
lb:
  image: jwilder/nginx-proxy
  volumes:
    - /var/run/docker.sock:/tmp/docker.sock:ro
  ports:
    - "80:80"
app:
  image: tutum/hello-world
  environment:
    VIRTUAL_HOST: www.mysite.com
The VIRTUAL_HOST environment variable must be set to the domain name that resolves to the IP address of your docker host.
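To verify the routing before DNS is in place, you can send a request to the Docker host with an explicit Host header (a sketch; DOCKER_HOST_IP is a placeholder):
# nginx-proxy routes by the Host header, matching it against VIRTUAL_HOST
curl -H "Host: www.mysite.com" http://DOCKER_HOST_IP/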
Another option is to use Traefik:
lb:
  image: traefik
  command: --docker
  ports:
    - "80:80"
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
app:
  image: tutum/hello-world
  labels:
    traefik.frontend.rule: Host:www.mysite.com
The traefik.frontend.rule label must define a Traefik rule set to the domain name that resolves to the IP address of your docker host.
Traefik also offers different load balancing strategies and circuit breakers.