kubernetes force service to use https

I want to expose the k8s API using a Service. My issue is that the API only responds on port 6443 over HTTPS; any attempt over HTTP returns status 400 Bad Request. How can I "force" the service to use HTTPS?
apiVersion: v1
kind: Service
metadata:
  name: k8s-api
  namespace: kube-system
  labels:
    label: k8s-api
spec:
  ports:
  - port: 80        # Port on which your service is running
    targetPort: 6443
    protocol: TCP
    name: http
  selector:
    name: kube-apiserver-master-node
Maybe this?
apiVersion: v1
kind: Service
metadata:
  name: k8s-api
  namespace: kube-system
  labels:
    label: k8s-api
spec:
  ports:
  - port: 443       # Port on which your service is running
    targetPort: 6443
    protocol: TCP
    name: http
  selector:
    name: kube-apiserver-master-node

If you are using the NGINX ingress controller, by default it offloads SSL and sends plain HTTP to the backend.
Changing the Service port to 6443 might help if you connect to the service directly.
If you go through the NGINX ingress, make sure it does not terminate SSL by adding these annotations:
nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
nginx.ingress.kubernetes.io/ssl-passthrough: "true"
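For reference, a minimal Ingress sketch that carries both annotations (the host api.example.com is an assumption, not from the question; ssl-passthrough also requires the ingress controller to run with the --enable-ssl-passthrough flag):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: k8s-api
  namespace: kube-system
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
spec:
  ingressClassName: nginx
  rules:
  - host: api.example.com   # assumption: replace with your own hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: k8s-api   # the Service from the question above
            port:
              number: 443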

Related

How to rewrite subdomain to sub-route in DigitalOcean kubernetes?

I have deployed a Node.js/Remix app to the DigitalOcean Kubernetes service.
I want all my subdomains to be rewritten to sub-routes, so xxx.example.com should become example.com/xxx. Is it possible to do this with the Kubernetes load balancer?
My current config is:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: LoadBalancer
  selector:
    app: my-deployment
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 3000
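A LoadBalancer Service on its own is layer 4 and cannot rewrite a host into a path; that kind of rewrite is usually handled by an ingress controller behind the load balancer. A rough sketch using the NGINX ingress controller's server-snippet annotation, assuming a wildcard DNS record for *.example.com (this is an illustration, not a verified answer):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: subdomain-to-path
  annotations:
    # Assumption: redirect any subdomain to the matching path on the apex domain.
    nginx.ingress.kubernetes.io/server-snippet: |
      if ($host ~ ^(?<sub>.+)\.example\.com$) {
        return 301 https://example.com/$sub$request_uri;
      }
spec:
  ingressClassName: nginx
  rules:
  - host: "*.example.com"
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service   # the Service from the question above
            port:
              number: 80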

Istio using IPv6 instead of IPv4

I am using Kubernetes with Minikube on a Windows 10 Home machine to "host" a gRPC service. I am working on getting Istio working in the cluster and keep running into the same issue, and I cannot figure out why. The problem is that once everything is up and running, the Istio gateway uses IPv6, seemingly for no reason at all. IPv6 is even disabled on my machine (via regedit) and on my network adapters. My other services are accessible over IPv4. Below are the steps for installing my environment:
minikube start
kubectl create namespace abc
kubectl apply -f service.yml -n abc
kubectl apply -f gateway.yml
istioctl install --set profile=default -y
kubectl label namespace abc istio-injection=enabled
Nothing is accessible over the network at this point, until I run the following in its own terminal:
minikube tunnel
Now I can access the gRPC service directly over IPv4 at 127.0.0.1:5000. However, the gateway is not reachable at 127.0.0.1:443; it is only reachable at [::1]:443.
Here is the service.yml:
apiVersion: v1
kind: Service
metadata:
  name: account-grpc
spec:
  ports:
  - name: grpc
    port: 5000
    protocol: TCP
    targetPort: 5000
  selector:
    service: account
    ipc: grpc
  type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    service: account
    ipc: grpc
  name: account-grpc
spec:
  replicas: 1
  selector:
    matchLabels:
      service: account
      ipc: grpc
  template:
    metadata:
      labels:
        service: account
        ipc: grpc
    spec:
      containers:
      - image: account-grpc
        name: account-grpc
        imagePullPolicy: Never
        ports:
        - containerPort: 5000
Here is the gateway.yml:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 443
      name: grpc
      protocol: GRPC
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: virtual-service
spec:
  hosts:
  - "*"
  gateways:
  - gateway
  http:
  - match:
    - uri:
        prefix: /account
    route:
    - destination:
        host: account-grpc
        port:
          number: 5000
And here are the results of kubectl get service istio-ingressgateway -n istio-system -o yaml:
apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: ...
  creationTimestamp: "2021-08-27T01:21:21Z"
  labels:
    app: istio-ingressgateway
    install.operator.istio.io/owning-resource: unknown
    install.operator.istio.io/owning-resource-namespace: istio-system
    istio: ingressgateway
    istio.io/rev: default
    operator.istio.io/component: IngressGateways
    operator.istio.io/managed: Reconcile
    operator.istio.io/version: 1.11.1
    release: istio
  name: istio-ingressgateway
  namespace: istio-system
  resourceVersion: "4379"
  uid: b4db0e2f-0f45-4814-b187-287acb28d0c6
spec:
  clusterIP: 10.97.4.216
  clusterIPs:
  - 10.97.4.216
  externalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - name: status-port
    nodePort: 32329
    port: 15021
    protocol: TCP
    targetPort: 15021
  - name: http2
    nodePort: 31913
    port: 80
    protocol: TCP
    targetPort: 8080
  - name: https
    nodePort: 32382
    port: 443
    protocol: TCP
    targetPort: 8443
  selector:
    app: istio-ingressgateway
    istio: ingressgateway
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: 127.0.0.1
Changing the port number to 80 resolved my issue. The problem was that my gRPC service was not using HTTPS. I will report back if I run into trouble once I change the service to use HTTPS.
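Presumably the change was to the server port in gateway.yml; a minimal sketch of what that might look like (keeping GRPC as the protocol, since plaintext gRPC runs over HTTP/2 on the gateway's port 80 listener):

# Assumption: the port that was changed is the Gateway server port, moved from 443 to 80.
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: grpc
      protocol: GRPC
    hosts:
    - "*"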

Cluster IP service isn't working as expected

I created a NodePort service for the httpd pods and a ClusterIP service for the Tomcat pods; they are in the same namespace behind an nginx LB. There is a weird issue with the app when the httpd and Tomcat services are not the same type. When I change both to ClusterIP, or both to NodePort, everything works fine...
Traffic flow is like this:
HTTP and HTTPS traffic -> LB -> Ingress -> Httpd -> Tomcat
HTTPS virtual host custom port traffic -> LB -> Tomcat
TCP traffic -> LB -> Tomcat
Is there anything that can cause issues between httpd and Tomcat? Even though I can telnet to the httpd and tomcat pods from outside, the app functionality breaks for some reason (some static and JSP pages do get processed, though).
httpd-service:
apiVersion: v1
kind: Service
metadata:
  name: httpd
  labels:
    app: httpd-service
  namespace: test-web-dev
spec:
  type: NodePort
  selector:
    app: httpd
  ports:
  - name: port-80
    port: 80
    protocol: TCP
    targetPort: 80
  - name: port-443
    port: 443
    protocol: TCP
    targetPort: 443
  sessionAffinity: "ClientIP"
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800
  externalTrafficPolicy: Local
tomcat-service:
apiVersion: v1
kind: Service
metadata:
  name: tomcat7
  namespace: test-web-dev
  annotations:
spec:
  selector:
    app: tomcat7         # Metadata label of the deployment pod template or pod metadata label
  ports:
  - name: port-8080      # Optional when there is only one port
    protocol: TCP
    port: 8080
    targetPort: 8080
  - name: port-8262
    protocol: TCP
    port: 8262
    targetPort: 8262
  sessionAffinity: "ClientIP"
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800
ingress lb:
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  1234: "test-web-dev/httpd:1234"
  8262: "test-web-dev/tomcat7:8262"
---
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  externalTrafficPolicy: Local
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: http
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  - name: port-1234
    port: 1234
    protocol: TCP
    targetPort: 1234
  - name: port-8262
    port: 8262
    protocol: TCP
    targetPort: 8262
Answering my own question.
NodePort services are required when the service needs to be exposed outside the cluster, e.g. to the internet.
ClusterIP services are used when services need to communicate internally, e.g. frontend to backend.
In my case, users need to connect to both httpd and Tomcat (on a specific app port) from outside, so both the tomcat and httpd services have to be of type NodePort. Configuring tomcat as ClusterIP breaks the app, since the Tomcat app port isn't reachable from the internet.
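A minimal sketch of the tomcat7 Service switched to NodePort as described above (the explicit nodePort value is an assumption; it can also be omitted so Kubernetes allocates one):

apiVersion: v1
kind: Service
metadata:
  name: tomcat7
  namespace: test-web-dev
spec:
  type: NodePort           # was ClusterIP (the default) before
  selector:
    app: tomcat7
  ports:
  - name: port-8080
    protocol: TCP
    port: 8080
    targetPort: 8080
  - name: port-8262
    protocol: TCP
    port: 8262
    targetPort: 8262
    nodePort: 30262        # assumption: any free port in the 30000-32767 range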

haproxy source ip address shows Kubernetes node ip address

I have HAproxy running in a Kubernetes container. This is what a sample log looks like
<134>Jul 20 13:11:37 haproxy[6]: <SOURCE_ADDRESS> [20/Jul/2020:13:11:37.713] front gameApi/game-api-test 0/0/0/9/9 200 384 - - ---- 37/37/0/0/0 0/0 {<FORWARD_FOR_ADDRESS>} "GET /api/games/lists?dtype=brandlist HTTP/1.1"
The <SOURCE_ADDRESS> here is the HAProxy Kubernetes node IP address, and I need it to be the client/X-Forwarded-For IP address so that Filebeat can parse the geolocation correctly.
Edit:
I found a solution in HAProxy itself, which is simply to set http-request set-src hdr(x-forwarded-for).
However, attempting to use the externalTrafficPolicy: Local solution seems to break HAProxy's ability to serve the website. When I try to reach the website, it says "this site can't be reached" or "Secure Connection Failed".
haproxy service
apiVersion: v1
kind: Service
metadata:
  name: haproxy-test
  labels:
    app: haproxy-test
  namespace: test
  annotations:
    # Note that the backend talks over HTTP.
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-east-2:redacted:certificate/redacted
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  ports:
  - port: 80
    name: http
    targetPort: 80
    protocol: "TCP"
  - port: 443
    name: https
    targetPort: 80
    protocol: "TCP"
  selector:
    app: haproxy-test
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: haproxy-test
  namespace: test
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: haproxy-test
    spec:
      containers:
      - name: haproxy-test
        image: <redacted>.dkr.ecr.us-east-2.amazonaws.com/haproxy:$TAG
        imagePullPolicy: Always
        ports:
        - containerPort: 80
        resources:
          limits:
            cpu: "50m"
          requests:
            cpu: "25m"
You can preserve the source IP by setting externalTrafficPolicy to Local. See this question for more details: How do Kubernetes NodePort services with Service.spec.externalTrafficPolicy=Local route traffic?
Alternatively, use http-request set-src hdr(x-forwarded-for) to make HAProxy take the contents of the X-Forwarded-For header as its internal notion of the request's source address, instead of the actual IP address that initiated the inbound connection.
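For reference, a minimal sketch of where that directive could live, assuming the HAProxy image loads its haproxy.cfg from a ConfigMap (the ConfigMap name, timeouts, and backend address are assumptions; the frontend and backend names mirror the log line above):

apiVersion: v1
kind: ConfigMap
metadata:
  name: haproxy-test-config   # assumption: mounted into the haproxy-test pod as haproxy.cfg
  namespace: test
data:
  haproxy.cfg: |
    defaults
      mode http
      timeout connect 5s
      timeout client  30s
      timeout server  30s
    frontend front
      bind *:80
      # Use the client IP carried in X-Forwarded-For as the source address,
      # so logs (and Filebeat geolocation) see the real client.
      http-request set-src hdr(x-forwarded-for)
      default_backend gameApi
    backend gameApi
      server game-api-test game-api-test:80   # assumption: upstream service name and port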

Service Topology in k8s 1.17 doesn't work

I enabled Service Topology in k8s, but when I create the following service, requests are not routed to the local pod:
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  topologyKeys:
  - "kubernetes.io/hostname"