I have my own cluster with a control plane and a worker node. I am trying to connect to PostgreSQL in the cluster using psql -h <ingress-url> -p 5432 -U postgres -W, but I get an error:
psql: error: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
But when I run curl <ingress-url>, the PostgreSQL server logs:
2022-05-27 04:00:50.406 UTC [208] LOG: invalid length of startup packet
That log line means my request did reach the PostgreSQL server, so why can't I connect to it?
Here are my resources:
Ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: preflight
  labels:
    helm.sh/chart: "preflight"
    helm.sh/version: "0.1.0"
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /
        pathType: Exact
        backend:
          service:
            name: preflight
            port:
              number: 5432
Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: preflight
  labels:
    helm.sh/chart: "preflight"
    helm.sh/version: "0.1.0"
spec:
  replicas: 1
  selector:
    matchLabels:
      helm.sh/chart: "preflight"
      helm.sh/version: "0.1.0"
  template:
    metadata:
      labels:
        helm.sh/chart: "preflight"
        helm.sh/version: "0.1.0"
    spec:
      containers:
      - name: preflight
        image: postgres:14
        env:
        - name: POSTGRES_PASSWORD
          valueFrom:
            configMapKeyRef:
              name: preflight
              key: postgresPassword
        ports:
        - containerPort: 5432
Service:
apiVersion: v1
kind: Service
metadata:
  name: preflight
  labels:
    helm.sh/chart: "preflight"
    helm.sh/version: "0.1.0"
spec:
  type: ClusterIP
  selector:
    helm.sh/chart: "preflight"
    helm.sh/version: "0.1.0"
  ports:
  - port: 5432
    targetPort: 5432
The ConfigMap contains postgresPassword=postgres.
The nginx ingress controller is an HTTP proxy. You are trying to route PostgreSQL traffic over HTTP, and that simply can't work.
What you need to do is expose a TCP service through the nginx ingress controller. See this page.
In a nutshell, you need to create a ConfigMap like this:
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: <namespace where you deployed ingress controller>
data:
  5432: "default/preflight:5432"
then ensure your nginx ingress controller starts with the --tcp-services-configmap flag pointing at that ConfigMap (in namespace/name form).
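For reference, on a stock ingress-nginx install this is just an extra argument on the controller container. A minimal sketch of the relevant part of the controller Deployment (the container name and the rest of the argument list depend on how you installed the controller):

spec:
  template:
    spec:
      containers:
      - name: controller
        args:
        - /nginx-ingress-controller
        # keep whatever arguments your install already passes; only this flag is added
        - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services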
Finally, ensure the nginx ingress controller Service (the one with type: LoadBalancer) exposes port 5432:
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  ...
spec:
  type: LoadBalancer
  ports:
  ...
  - name: pgsql
    port: 5432
    targetPort: 5432
    protocol: TCP
  ...
Please note that your provider's load balancer must support TCP backends (most do, but it is worth checking).
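Once the ConfigMap, the controller flag, and the Service port are in place, the original psql command should work against the load balancer's address, because the connection is now forwarded as raw TCP end to end. For example (assuming the same postgres user as in the question):

psql -h <load-balancer-hostname> -p 5432 -U postgres -W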
Related
I have set up MinIO in Kubernetes (k3s), a one-node installation.
Services
apiVersion: v1
kind: Service
metadata:
  name: minio
  namespace: minio
  labels:
    app: minio
spec:
  clusterIP: None
  selector:
    app: minio
  ports:
  - port: 9011
    name: minio
---
apiVersion: v1
kind: Service
metadata:
  name: minio-service
  namespace: minio
  labels:
    app: minio
spec:
  type: LoadBalancer
  selector:
    app: minio
  ports:
  - port: 9012
    targetPort: 9011
    protocol: TCP
StatefulSet
[. . .]
      containers:
      - name: ches
        image: minio/minio
        args:
        - server
        - /data
[. . .]
        - containerPort: 9000
          hostPort: 9011
[. . .]
The command kubectl logs minio-0 -n minio returns the following:
API: http://10.42.0.14:9000 http://127.0.0.1:9000
Console: http://10.42.0.14:41989 http://127.0.0.1:41989
I am trying to set up Ingress. The steps I followed are:
Set up an Ingress controller
From here: https://kubernetes.github.io/ingress-nginx/deploy/
helm upgrade --install ingress-nginx ingress-nginx \
--repo https://kubernetes.github.io/ingress-nginx \
--namespace ingress-nginx --create-namespace
Create an Internal Service
apiVersion: v1
kind: Service
metadata:
  name: minio-service-ingress
  namespace: minio
  labels:
    app: minio
spec:
  selector:
    app: minio
  ports:
  - port: 9011
    name: minio
Create Ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minio-ingress
  namespace: minio
spec:
  rules:
  - host: minio.com
    http:
      paths:
      - backend:
          service:
            name: minio-service-ingress
            port:
              number: 9011
        path: /
        pathType: Prefix
When executing kubectl get ing -n minio:
NAME CLASS HOSTS ADDRESS PORTS AGE
minio-ingress <none> minio.com 192.168.1.14 80 43m
In /etc/hosts I added the entry:
192.168.1.14 minio.com
However, when I try to open http://minio.com/ in the browser, I get:
Bad Gateway
Am I missing something here?
I created a forward proxy server on EKS pods behind an ALB (created by the AWS Load Balancer Controller). Every pod can serve a response on port 8118 through the ALB.
The resources, like the pods and the ingress, looked good to me. Then I tested whether the proxy server works with curl -Lx k8s-proxy-sample-domain.ap-uswest-1.elb.amazonaws.com:18118 ipinfo.io
Normally I would get a random IP address back from ipinfo.io, but I didn't. So I also tried a port-forward, like this:
kubectl port-forward specifi-pod 8118:8118
Then I retried the proxied request against my local address:
curl -Lx localhost:8118 ipinfo.io
In this case, it worked. I can't figure out the reason. What is the difference between going THROUGH the ALB and port-forwarding? Should I use an NLB for some reason, or is something misconfigured?
My environment
k8s version: v1.18.2
node type: fargate
Manifest
Here is my manifest.
---
apiVersion: v1
kind: Namespace
metadata:
  name: tor-proxy
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: tor-proxy
  name: tor-proxy-deployment
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: tor-proxy
  replicas: 5
  template:
    metadata:
      labels:
        app.kubernetes.io/name: tor-proxy
    spec:
      containers:
      - image: dperson/torproxy
        imagePullPolicy: Always
        name: tor-proxy
        ports:
        - containerPort: 8118
---
apiVersion: v1
kind: Service
metadata:
  labels:
    name: tor-proxy
  name: tor-proxy-service
  namespace: tor-proxy
spec:
  ports:
  - port: 18118
    targetPort: 8118
    protocol: TCP
  type: NodePort
  selector:
    app.kubernetes.io/name: tor-proxy
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  namespace: tor-proxy
  name: tor-proxy-ingress
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 18118}]'
spec:
  rules:
  - http:
      paths:
      - path: /*
        backend:
          serviceName: tor-proxy-service
          servicePort: 18118
Use an NLB, not an ALB. The ALB is a layer-7 HTTP load balancer and cannot carry the proxy's raw TCP traffic, whereas an NLB operates at layer 4 and passes the client's connection (including the client IP) through the proxy server to the target site.
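A minimal sketch of that approach, skipping the Ingress/ALB entirely and exposing the Service itself through an NLB (the annotation shown is the in-tree one; on Fargate you may additionally need the AWS Load Balancer Controller with IP targets):

apiVersion: v1
kind: Service
metadata:
  name: tor-proxy-service
  namespace: tor-proxy
  annotations:
    # request a layer-4 Network Load Balancer instead of the default classic ELB
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: tor-proxy
  ports:
  - port: 18118
    targetPort: 8118
    protocol: TCP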
Hello, when I use a NodePort to expose my redis service, it works fine and I am able to access it.
But if I switch to the Ingress Nginx controller, it refuses to connect. Other apps work fine with ingress.
Here is my service:
apiVersion: v1
kind: Service
metadata:
  name: redis-svc
spec:
  # type: NodePort
  ports:
  - name: http
    port: 6379
    targetPort: 6379
    protocol: TCP
    # nodePort: 30007
  selector:
    app: redis
And here is the ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: redis-ing
  annotations:
    kubernetes.io/ingress.class: "nginx"
    ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
    # nginx.ingress.kubernetes.io/enable-cors: "true"
    # nginx.ingress.kubernetes.io/cors-allow-methods: "PUT, GET, POST, OPTIONS"
    # nginx.ingress.kubernetes.io/cors-allow-origin: "https://test.hefest.io"
    # nginx.ingress.kubernetes.io/cors-allow-credentials: "true"
spec:
  tls:
  - secretName: letsencrypt-prod
    hosts:
    - redis-dev.domain.com
  rules:
  - host: redis-dev.domain.com
    http:
      paths:
      - path: /
        backend:
          serviceName: redis-svc
          servicePort: 6379
Any idea what the issue could be?
I am using this ingress controller: https://github.com/nginxinc/kubernetes-ingress
Redis listens on 6379, which is not an HTTP port (80, 443), so you need to enable TCP/UDP support in the nginx ingress controller. The minikube docs here show how to do it for redis.
Update the TCP and/or UDP services configmaps
Borrowing from the tutorial on configuring TCP and UDP services with the ingress nginx controller, we will need to edit the ConfigMap that is installed by default when enabling the minikube ingress addon.
There are two ConfigMaps, one for TCP services and one for UDP services. By default they look like this:
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: udp-services
  namespace: ingress-nginx
Since these configmaps are centralized and may contain configurations, it is best if we only patch them rather than completely overwrite them.
Let’s use this redis deployment as an example:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-deployment
  namespace: default
  labels:
    app: redis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - image: redis
        imagePullPolicy: Always
        name: redis
        ports:
        - containerPort: 6379
          protocol: TCP
Create a file redis-deployment.yaml and paste the contents above. Then install the redis deployment with the following command:
kubectl apply -f redis-deployment.yaml
Next we need to create a service that can route traffic to our pods:
apiVersion: v1
kind: Service
metadata:
  name: redis-service
  namespace: default
spec:
  selector:
    app: redis
  type: ClusterIP
  ports:
  - name: tcp-port
    port: 6379
    targetPort: 6379
    protocol: TCP
Create a file redis-service.yaml and paste the contents above. Then install the redis service with the following command:
kubectl apply -f redis-service.yaml
To add a TCP service to the nginx ingress controller you can run the following command:
kubectl patch configmap tcp-services -n kube-system --patch '{"data":{"6379":"default/redis-service:6379"}}'
Where:
6379 : the port your service should listen to from outside the minikube virtual machine
default : the namespace that your service is installed in
redis-service : the name of the service
We can verify that our resource was patched with the following command:
kubectl get configmap tcp-services -n kube-system -o yaml
We should see something like this:
apiVersion: v1
data:
  "6379": default/redis-service:6379
kind: ConfigMap
metadata:
  creationTimestamp: "2019-10-01T16:19:57Z"
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
  name: tcp-services
  namespace: kube-system
  resourceVersion: "2857"
  selfLink: /api/v1/namespaces/kube-system/configmaps/tcp-services
  uid: 4f7fac22-e467-11e9-b543-080027057910
The only value you need to validate is that there is a value under the data property that looks like this:
"6379": default/redis-service:6379
Patch the ingress-nginx-controller
There is one final step that must be done in order to obtain connectivity from outside the cluster: we need to patch the nginx controller so that it listens on port 6379 and can route traffic to your service. To do this, we need to create a patch file.
spec:
  template:
    spec:
      containers:
      - name: ingress-nginx-controller
        ports:
        - containerPort: 6379
          hostPort: 6379
Create a file called ingress-nginx-controller-patch.yaml and paste the contents above.
Next apply the changes with the following command:
kubectl patch deployment ingress-nginx-controller --patch "$(cat ingress-nginx-controller-patch.yaml)" -n kube-system
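Once the patched controller pod has rolled out, you can verify connectivity from outside the cluster; with minikube, for example (assuming redis-cli is installed on the host):

redis-cli -h $(minikube ip) -p 6379 ping
# expected reply: PONG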
The way I made it work was by enabling ssl-passthrough on the nginx-ingress controller (note that the annotation below only works if the controller itself is started with the --enable-ssl-passthrough flag).
Once my nginx-ingress controller was patched, I was able to connect.
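A sketch of what that patch amounts to on the controller container (the deployment name and the rest of the argument list depend on your install):

args:
- /nginx-ingress-controller
# keep whatever arguments your install already passes; only this flag is added
- --enable-ssl-passthrough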
ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: redis-ingress
  namespace: default
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-passthrough: 'true'
spec:
  tls:
  - hosts:
    - <host_address>
    secretName: <k8s_secret_name>
  rules:
  - host: <host_address>
    http:
      paths:
      - path: "/"
        pathType: Prefix
        backend:
          service:
            name: redis-service
            port:
              number: 6380
Python snippet to connect:
import redis

r = redis.StrictRedis(host='<host_address>',
                      port=443, db=0, ssl=True,
                      ssl_ca_certs='server.pem')
print(r.ping())
I do use Redis with TLS in my case.
I configured my services with type ClusterIP, and I want to make them communicate.
Service
apiVersion: v1
kind: Service
metadata:
  labels:
    app: app-backend-deployment
  name: app-backend
spec:
  type: ClusterIP
  ports:
  - port: 8020
    protocol: TCP
    targetPort: 8100
  selector:
    app: app-backend
Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: app-backend
  name: app-backend-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app-backend
  template:
    metadata:
      labels:
        app: app-backend
    spec:
      containers:
      - name: app-backend
        image: app-backend
        ports:
        - containerPort: 8100
        imagePullPolicy: Never
ConfigMap
apiVersion: v1
kind: ConfigMap
metadata:
  name: backend-conf # name of configMap
data:
  BACKEND_SERVICE_HOST: app-backend:8020
That is what I pass to the frontend service so it can make a REST call through the DNS name, for example http://app-backend:8020/get/1. But as I can see in the console, the app cannot resolve the DNS name: net::ERR_NAME_NOT_RESOLVED.
I also checked DNS from a busybox pod:
busybox nslookup app-backend.default.svc.cluster.local
Server: 10.96.0.10
Address: 10.96.0.10:53
Name: app-backend.default.svc.cluster.local
Address: 10.106.41.36
And compared it to
kubectl describe svc app-backend
Name: app-backend
Namespace: default
Labels: app=app-backend-deployment
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"...
Selector: app=app-backend
Type: ClusterIP
IP: 10.106.41.36
Port: <unset> 8020/TCP
TargetPort: 8100/TCP
As you can see, the Address is the same IP as the Service's, but I don't know where to look to find out why the DNS name doesn't resolve. kubectl version: Client "v1.15.5", Server "v1.17.3".
Because the frontend is served to the local machine (that is how Angular works), its REST requests cannot reach the backend service through Kubernetes DNS; I need to route them through an Ingress. Due to the different annotations, I had to use two Ingress resources. Maybe there is a better way with just one, but when I tried a single Ingress I couldn't make both paths work with the same annotation (though see the sketch after these manifests).
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /$2
  name: app-backend-ingress
spec:
  rules:
  - host: app.io
    http:
      paths:
      - path: /api(/|$)(.*)
        backend:
          serviceName: app-backend
          servicePort: 8020
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
  name: app-frontend-ingress
spec:
  rules:
  - host: app.io
    http:
      paths:
      - path: /
        backend:
          serviceName: app-frontend
          servicePort: 80
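For what it's worth, a single Ingress can probably cover both paths if the rewrite uses capture groups for both of them. This is only a sketch based on the ingress-nginx rewrite documentation, not something tested against this exact app:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
  name: app-ingress
spec:
  rules:
  - host: app.io
    http:
      paths:
      - path: /api(/|$)(.*)   # /api/get/1 is rewritten to /get/1 on the backend
        backend:
          serviceName: app-backend
          servicePort: 8020
      - path: /()(.*)         # everything else keeps its original path
        backend:
          serviceName: app-frontend
          servicePort: 80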
I have an EKS cluster for which I want:
- one load balancer per cluster,
- Ingress rules to direct traffic to the right namespace and the right service.
I have been following this guide : https://www.digitalocean.com/community/tutorials/how-to-set-up-an-nginx-ingress-with-cert-manager-on-digitalocean-kubernetes
My deployments:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - name: hello-world
        image: IMAGENAME
        ports:
        - containerPort: 8000
          name: hello-world
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: bleble
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: bleble
  template:
    metadata:
      labels:
        app: bleble
    spec:
      containers:
      - name: bleble
        image: IMAGENAME
        ports:
        - containerPort: 8000
          name: bleble
The Services for those Deployments:
apiVersion: v1
kind: Service
metadata:
  name: hello-world-svc
spec:
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8000
  selector:
    app: hello-world
  type: NodePort
---
apiVersion: v1
kind: Service
metadata:
  name: bleble-svc
spec:
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8000
  selector:
    app: bleble
  type: NodePort
My Load balancer:
kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  externalTrafficPolicy: Local
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  ports:
  - name: http
    port: 80
    targetPort: http
My ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: simple-fanout-example
  namespace: default
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: internal-lb.aws.com
    http:
      paths:
      - path: /bleble
        backend:
          serviceName: bleble-svc
          servicePort: 80
      - path: /hello-world
        backend:
          serviceName: hello-world-svc
          servicePort: 80
I've set up the Nginx Ingress Controller with this: kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.24.1/deploy/mandatory.yaml
I am unsure why I get a 503 Service Temporarily Unavailable for one service and a 502 for the other. I would guess it's a problem with ports or namespaces? In the guide, they don't define a namespace for the deployments.
Every resource is created correctly, and I think the ingress is actually working but is getting confused about where to route.
Thanks for your help!
In general, use externalTrafficPolicy: Cluster instead of Local. You can gain some performance (latency) improvement with Local, but you need to configure pod allocation carefully, and misconfigurations there lead to 5xx errors. In addition, Cluster is the default value for externalTrafficPolicy.
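You can switch the policy on the existing Service in place, for example:

kubectl patch svc ingress-nginx -n ingress-nginx -p '{"spec":{"externalTrafficPolicy":"Cluster"}}'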
In your ingress, you route /bleble to the service bleble, but your service name is actually bleble-svc; please make them consistent. Also, you need to set servicePort to 8080, since 8080 is what you exposed in your Service configuration.
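Concretely, the backend references would look like this (a sketch based on the Services defined above):

- path: /bleble
  backend:
    serviceName: bleble-svc      # must match the Service name exactly
    servicePort: 8080            # the Service port, not the container port
- path: /hello-world
  backend:
    serviceName: hello-world-svc
    servicePort: 8080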
For an internal service like bleble-svc, a ClusterIP is good enough in your case, as it does not need external access.
Hope this helps.
Found it!
The containerPort in the Deployments was set to 8000, and the targetPort of the Services as well, but the person who wrote the Dockerfile exposed port 80. That was the reason for the 502 Bad Gateway!
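In other words, each Service's targetPort has to match the port the container actually listens on; with the image as built, that means (a sketch):

ports:
- port: 8080
  protocol: TCP
  targetPort: 80   # the port the image actually serves on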
Thanks a lot as well to #Fei, who has been a fantastic helper!