DigitalOcean Loadbalancer behavior Kubernetes TCP 443

Currently I have this LoadBalancer Service on my Kubernetes cluster:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP [HIDDEN] <none> 443/TCP 44h
load-balancer LoadBalancer [HIDDEN] [HIDDEN] 443:30014/TCP 39h
This is my .yaml config file:
apiVersion: v1
kind: Service
metadata:
  name: load-balancer
spec:
  selector:
    app: nodeapp
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 443
      targetPort: 3000
      name: https
For some reason DigitalOcean does not set up HTTPS; instead it leaves it as TCP 443. I then have to manually go to DigitalOcean, change TCP to HTTPS, and create the Let's Encrypt certificate. How can I make Kubernetes create a load balancer using HTTPS on port 443 instead of TCP 443?

According to their documentation, you need to add additional annotations like this:
---
kind: Service
apiVersion: v1
metadata:
  name: https-with-cert
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-protocol: "http"
    service.beta.kubernetes.io/do-loadbalancer-algorithm: "round_robin"
    service.beta.kubernetes.io/do-loadbalancer-tls-ports: "443"
    service.beta.kubernetes.io/do-loadbalancer-certificate-id: "your-certificate-id"
spec:
  type: LoadBalancer
  selector:
    app: nginx-example
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 80
    - name: https
      protocol: TCP
      port: 443
      targetPort: 80
How to add SSL certificate: https://www.digitalocean.com/docs/networking/load-balancers/how-to/custom-ssl-cert/
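Applied to the Service from the original question, a minimal sketch might look like this; the certificate ID is a placeholder for a certificate that already exists in your DigitalOcean account:
---
apiVersion: v1
kind: Service
metadata:
  name: load-balancer
  annotations:
    # Terminate TLS at the DigitalOcean load balancer on port 443
    service.beta.kubernetes.io/do-loadbalancer-protocol: "http"
    service.beta.kubernetes.io/do-loadbalancer-tls-ports: "443"
    # Placeholder: the ID of a certificate managed in your DigitalOcean account
    service.beta.kubernetes.io/do-loadbalancer-certificate-id: "your-certificate-id"
spec:
  type: LoadBalancer
  selector:
    app: nodeapp
  ports:
    - name: https
      protocol: TCP
      port: 443
      targetPort: 3000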

A Service of type LoadBalancer will create a Layer 4 load balancer (Network LB), which is aware only of the IP and port.
You need a Layer 7 load balancer (Application LB), which is application aware.
So, to enable HTTPS, you will need to manage it via an Ingress.
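For illustration, a minimal Ingress sketch that terminates TLS might look like the following; it assumes an NGINX ingress controller is installed, and the hostname, TLS secret, and backend service names are placeholders:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nodeapp-ingress                # placeholder name
spec:
  ingressClassName: nginx              # assumes an NGINX ingress controller
  tls:
    - hosts:
        - example.com                  # placeholder hostname
      secretName: example-com-tls      # placeholder TLS secret (e.g. created by cert-manager)
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nodeapp          # placeholder ClusterIP Service in front of the app pods
                port:
                  number: 3000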

Related

kubernetes service not reachable

I have services created using
apiVersion: v1
kind: Service
metadata:
  name: traefik
  namespace: kube-system
spec:
  selector:
    app: traefik
  ports:
    - name: web
      protocol: TCP
      port: 80
      targetPort: 80
    - name: websecure
      protocol: TCP
      port: 443
      targetPort: 443
    - name: admin
      protocol: TCP
      port: 8080
      targetPort: 8080
Here is the service
% kubectl -n kube-system get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S)
traefik ClusterIP 172.20.89.154 <none> 80/TCP,443/TCP,4080/TCP,4043/TCP,8080/TCP
When I try to connect to this service from the same or a different namespace, I get:
curl: (6) Could not resolve host: treafik.kube-system.svc.cluster.local
But if I use the Cluster IP 172.20.89.154:8080, everything works. I am able to use the service name for other services and it works.
What is wrong in this setup?
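For reference, in-cluster service DNS names follow the pattern <service>.<namespace>.svc.cluster.local, so the service above should resolve as traefik.kube-system.svc.cluster.local. A throwaway debug pod such as this sketch (name and image are arbitrary) can be used to check resolution from inside the cluster:
apiVersion: v1
kind: Pod
metadata:
  name: dns-debug                  # arbitrary name for a short-lived debug pod
spec:
  restartPolicy: Never
  containers:
    - name: debug
      image: busybox:1.36          # any image that ships nslookup will do
      command: ["nslookup", "traefik.kube-system.svc.cluster.local"]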

How to maintain udp session within a pod in kubernetes?

When receiving UDP packets, I want to get the source IP and source port from the packet, and I expect them not to change if the packets come from the same source (same IP and same port). My packets are sent through kube-proxy in iptables mode, but when my sender pauses for several seconds, the source port changes, and setting sessionAffinity to "ClientIP" doesn't help. It seems that the UDP session is only kept for a few seconds. Is there any way to extend the session time, or to keep the port the same when the sender's IP and port haven't changed?
This is a community wiki answer. Feel free to expand it.
As already mentioned in the comments, you can try to use the NGINX Ingress Controller. The Exposing TCP and UDP services documentation says:
Ingress does not support TCP or UDP services. For this reason this Ingress controller uses the flags --tcp-services-configmap and --udp-services-configmap to point to an existing config map where the key is the external port to use and the value indicates the service to expose using the format: <namespace/service name>:<service port>:[PROXY]:[PROXY]
The example shows how to expose the service kube-dns running in the namespace kube-system on port 53, using the external port 53:
apiVersion: v1
kind: ConfigMap
metadata:
  name: udp-services
  namespace: ingress-nginx
data:
  53: "kube-system/kube-dns:53"
If TCP/UDP proxy support is used, then those ports need to be exposed in the Service defined for the Ingress:
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  type: LoadBalancer
  ports:
    - name: http
      port: 80
      targetPort: 80
      protocol: TCP
    - name: https
      port: 443
      targetPort: 443
      protocol: TCP
    - name: proxied-tcp-9000
      port: 9000
      targetPort: 9000
      protocol: TCP
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
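Since the original question is about UDP, the corresponding UDP port also has to be exposed on that Service. As a sketch, the extra entry under spec.ports for the kube-dns example above could look like this (the port number must match the key in the udp-services ConfigMap):
    - name: proxied-udp-53
      port: 53
      targetPort: 53
      protocol: UDP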

Using the external IP service of the load balancer for different pod Kubernetes

I have deployed RabbitMQ in Kubernetes using a service with the load balancer type. When creating a service, an external IP is created. Could you please tell me if I can bind another deployment to this IP with other ports? Thanks.
It is possible; you just have to create a service with multiple ports, for example:
apiVersion: v1
kind: Service
metadata:
  name: service-name
spec:
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443
    - name: any-other-port
      port: <port-number>
      targetPort: <target-port>
  selector:
    app: app
  type: LoadBalancer
And you will get output similar to this:
$ kubectl get svc
service-name LoadBalancer <Internal-IP> <External-IP> 80:30870/TCP,443:32602/TCP,<other-port>:32388/TCP

Cluster IP service isn't working as expected

I created a NodePort service for the httpd pods and a ClusterIP service for the tomcat pods; they're in the same namespace behind an nginx LB. There is a weird issue with the app when the httpd and tomcat services are not the same type. When I change both to ClusterIP, or both to NodePort, everything works fine...
Traffic flow is like this:
HTTP and HTTPS traffic -> LB -> Ingress -> Httpd -> Tomcat
HTTPS virtual host custom port traffic -> LB -> Tomcat
TCP traffic -> LB -> Tomcat
Is there anything that can cause issues between httpd and Tomcat? I can telnet to the httpd and tomcat pods from outside, but for some reason the app functionality breaks (some static and JSP pages do get processed, though).
httpd-service:
apiVersion: v1
kind: Service
metadata:
  name: httpd
  labels:
    app: httpd-service
  namespace: test-web-dev
spec:
  type: NodePort
  selector:
    app: httpd
  ports:
    - name: port-80
      port: 80
      protocol: TCP
      targetPort: 80
    - name: port-443
      port: 443
      protocol: TCP
      targetPort: 443
  sessionAffinity: "ClientIP"
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800
  externalTrafficPolicy: Local
tomcat-service:
apiVersion: v1
kind: Service
metadata:
  name: tomcat7
  namespace: test-web-dev
  annotations:
spec:
  selector:
    app: tomcat7 # Metadata label of the deployment pod template or pod metadata label
  ports:
    - name: port-8080 # Optional when there is only one port
      protocol: TCP
      port: 8080
      targetPort: 8080
    - name: port-8262
      protocol: TCP
      port: 8262
      targetPort: 8262
  sessionAffinity: "ClientIP"
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800
ingress lb:
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  1234: "test-web-dev/httpd:1234"
  8262: "test-web-dev/tomcat7:8262"
---
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  externalTrafficPolicy: Local
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: http
    - name: https
      port: 443
      protocol: TCP
      targetPort: https
    - name: port-1234
      port: 1234
      protocol: TCP
      targetPort: 1234
    - name: port-8262
      port: 8262
      protocol: TCP
      targetPort: 8262
Answering my own question.
NodePort services are required when the service needs to be exposed outside of the cluster, e.g. to the internet.
ClusterIP services are used when services need to communicate internally, e.g. frontend to backend.
In my case, users need to connect to both httpd and tomcat (a specific app port) from outside, so both the tomcat and httpd services have to be of type NodePort. Configuring tomcat as ClusterIP will break the app, since the tomcat app port isn't reachable from the internet.
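As a sketch, the tomcat service above would then look something like this (only the type changes; everything else stays as in the question):
apiVersion: v1
kind: Service
metadata:
  name: tomcat7
  namespace: test-web-dev
spec:
  type: NodePort              # exposed on the nodes so the app port is reachable from outside
  selector:
    app: tomcat7
  ports:
    - name: port-8080
      protocol: TCP
      port: 8080
      targetPort: 8080
    - name: port-8262
      protocol: TCP
      port: 8262
      targetPort: 8262
  sessionAffinity: "ClientIP"
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800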

Gloo gateway-proxy External IP on GKE : telnet: Unable to connect to remote host: Connection refused

I deployed NiFi and the Gloo API Gateway on the same GKE cluster. The external IP exposed as a LoadBalancer works well (it opens in a web browser or via telnet). However, when I use telnet to connect to the Gloo API Gateway from GKE Cloud Shell, my connection is refused.
Based on related causes and solutions, I have allowed traffic to flow into the cluster by creating a firewall rule:
gcloud compute firewall-rules create my-rule --allow=all
What can I do about it?
kubectl get -n gloo-system service/gateway-proxy-v2 -o yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"gloo","gateway-proxy-id":"gateway-proxy-v2","gloo":"gateway-proxy"},"name":"gateway-proxy-v2","namespace":"gloo-system"},"spec":{"ports":[{"name":"http","port":80,"protocol":"TCP","targetPort":8080},{"name":"https","port":443,"protocol":"TCP","targetPort":8443}],"selector":{"gateway-proxy":"live","gateway-proxy-id":"gateway-proxy-v2"},"type":"LoadBalancer"}}
  labels:
    app: gloo
    gateway-proxy-id: gateway-proxy-v2
    gloo: gateway-proxy
  name: gateway-proxy-v2
  namespace: gloo-system
spec:
  clusterIP: 10.122.10.215
  externalTrafficPolicy: Cluster
  ports:
    - name: http
      nodePort: 30189
      port: 80
      protocol: TCP
      targetPort: 8080
    - name: https
      nodePort: 30741
      port: 443
      protocol: TCP
      targetPort: 8443
  selector:
    gateway-proxy: live
    gateway-proxy-id: gateway-proxy-v2
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
      - ip: 34.xx.xx.xx
kubectl get svc -n gloo-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
gateway-proxy-v2 LoadBalancer 10.122.10.215 34.xx.xx.xx 80:30189/TCP,443:30741/TCP 63m
gloo ClusterIP 10.122.5.253 <none> 9977/TCP 63m
You can try bumping to Gloo version 1.3.6.
Please take a look at https://docs.solo.io/gloo/latest/upgrading/1.0.0/ to track any possible breaking changes.