Istio routing, metallb and https issue - kubernetes

I am having a problem with Kubernetes (K3s), Istio, MetalLB and cert-manager.
I have my cluster hosted on a VPS with one public IP. As my service provider doesn't offer a load balancer, I am using MetalLB with my public IP so that the istio-ingressgateway is reachable from the internet.
In this cluster I have three namespaces for my applications: one for the qa environment, another for dev, and one for prod.
I configured my DNS provider with my public IP and configured cert-manager to get a certificate from Let's Encrypt (I am using Issuer instead of ClusterIssuer because I want to use the staging API for dev and qa and the production API for prod). Certificates are issued fine, but the Istio Gateway only routes traffic when I use port 80; when I enable 443 I can't reach the site over https and get "ERR_CONNECTION_RESET".
I can't understand why everything is fine for 80 but not for 443.
My application exposes its traffic on port 80 over http.
Here are my yaml files:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-v1
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-v1
  template:
    metadata:
      labels:
        app: hello-v1
    spec:
      containers:
      - name: hello
        image: pablin.dynu.net:5000/chevaca/chevacaweb:latest
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
        resources:
          requests:
            memory: "64Mi"
            cpu: "200m"
          limits:
            memory: "128Mi"
            cpu: "500m"
---
kind: Service
apiVersion: v1
metadata:
  name: hello-v1-svc
  namespace: chevaca-qa
spec:
  selector:
    app: hello-v1
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
---
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: qa-app-gateway
  namespace: chevaca-qa
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      name: http
      number: 80
      protocol: HTTP
    hosts:
    - qa-app.chevaca.com
  - port:
      name: https
      number: 443
      protocol: HTTPS
    tls:
      mode: SIMPLE
      credentialName: front-cert
    hosts:
    - qa-app.chevaca.com
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: front-app
  namespace: chevaca-qa
spec:
  hosts:
  - qa-app.chevaca.com
  gateways:
  - qa-app-gateway
  http:
  - route:
    - destination:
        host: hello-v1-svc
        port:
          number: 80

I made some progress: I can use https in the istio-system namespace, but not in default or any of my custom namespaces. I am still searching for how to fix this issue.

It's fixed.
The solution is to create the certificate in the istio-system namespace with the secret name ingressgateway-certs.
That way, the certificate is mounted into Istio's ingress gateway, and nothing else needs to be configured on the Istio custom resources. If you have multiple namespaces, as in my scenario, you can use multiple hosts on the certificate or you can use a wildcard.
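For reference, a cert-manager Certificate along those lines might look roughly like the sketch below; the issuer reference and DNS names are placeholders, and the exact secret name the gateway expects depends on the Istio version and installation (here it is the one described above).
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: ingressgateway-cert
  namespace: istio-system            # must be created next to the ingress gateway
spec:
  secretName: ingressgateway-certs   # secret name the gateway mounts, as described above
  issuerRef:
    name: letsencrypt-prod           # placeholder: your Issuer or ClusterIssuer
    kind: ClusterIssuer
  dnsNames:                          # either list each environment's host...
  - qa-app.chevaca.com
  # - "*.chevaca.com"                # ...or use a single wildcard instead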

Related

NIFI does not work in the Kubernetes cluster. The browser shows a mixed content error since NIFI API calls are made over http from an https host

NIFI was deployed in the EKS cluster. The article at this link (https://jmrobles.medium.com/running-apache-nifi-on-kubernetes-5b7e95adebf3) was followed to deploy the application. I was able to bring up the application and log in successfully. But when I perform certain operations in the UI, the back-end API calls fail with a mixed content error. I am using SSL termination at the ELB (classic) load balancer, so all traffic to the ingress and the pods is on port 80. Following are some screenshots of the UI and the console.
NIFI app loading and I am able to login
NIFI API calls are failing with mixed content error.
Following are the deployment, Ingress, and service YAML manifest files:
Deployment Yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nifi
  namespace: default
spec:
  selector:
    matchLabels:
      app: nifi
  template:
    metadata:
      labels:
        app: nifi
    spec:
      containers:
      - name: nifi
        image: apache/nifi:1.14.0
        resources:
          limits:
            memory: "1Gi"
            cpu: "500m"
        ports:
        - containerPort: 8443
        env:
        - name: SINGLE_USER_CREDENTIALS_USERNAME
          value: "admin"
        - name: SINGLE_USER_CREDENTIALS_PASSWORD
          value: "XXXXX"
Service Yaml:
apiVersion: v1
kind: Service
metadata:
  name: nifi-svc
  namespace: default
spec:
  selector:
    app: nifi
  ports:
  - port: 8443
    targetPort: 8443
Ingress Yaml:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nifi-ingress-resource
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/upstream-vhost: "localhost:8443"
    nginx.ingress.kubernetes.io/proxy-redirect-from: "https://localhost:8443"
    nginx.ingress.kubernetes.io/proxy-redirect-to: "https://nifi.example.com"
spec:
  ingressClassName: nginx
  rules:
  - host: nifi.example.com
    http:
      paths:
      - path: "/"
        pathType: Prefix
        backend:
          service:
            name: nifi-svc
            port:
              number: 8443
I am not sure if this can be solved with NIFI properties or with Kubernetes ingress settings. I would like the NIFI application to make API calls over https so that there are no mixed content errors and the API calls succeed.

Istio: How to redirect to HTTPS except for /.well-known/acme-challenge

I want the traffic that comes to my cluster as HTTP to be redirected to HTTPS. However, the cluster receives requests from hundreds of domains that change dynamically (new certs are created with cert-manager), so I want the redirect to happen only when the URI doesn't have the prefix /.well-known/acme-challenge.
I am using one gateway that listens on 443 and another gateway that listens on 80 and sends the HTTP traffic to an acme-solver virtual service.
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: default-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - hosts:
    - site1.com
    port:
      name: https-site1.com
      number: 443
      protocol: HTTPS
    tls:
      credentialName: cert-site1.com
      mode: SIMPLE
  - hosts:
    - site2.com
    port:
      name: https-site2.com
      number: 443
      protocol: HTTPS
    tls:
      credentialName: cert-site2.com
      mode: SIMPLE
  ...
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: acme-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
  - hosts:
    - '*'
    port:
      name: http
      number: 80
      protocol: HTTP
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: acme-solver
  namespace: istio-system
spec:
  hosts:
  - "*"
  gateways:
  - acme-gateway
  http:
  - match:
    - uri:
        prefix: /.well-known/acme-challenge
    route:
    - destination:
        host: acme-solver.istio-system.svc.cluster.local
        port:
          number: 8089
  - redirect:
      authority: # Should redirect to https://$HOST, but I don't know how to get the $HOST
How can I do that using istio?
Looking into the documentation:
The HTTP-01 challenge can only be done on port 80. Allowing clients to specify arbitrary ports would make the challenge less secure, and so it is not allowed by the ACME standard.
As a workaround, please consider using the DNS-01 challenge:
a) It only makes sense to use DNS-01 challenges if your DNS provider has an API you can use to automate updates.
b) Using this approach, you should consider the additional security risk stated in the docs:
Pros:
You can use this challenge to issue certificates containing wildcard domain names.
It works well even if you have multiple web servers.
Cons:
Keeping API credentials on your web server is risky.
Your DNS provider might not offer an API.
Your DNS API may not provide information on propagation times.
As mentioned here:
In order to be automatic, though, the software that requests the certificate will also need to be able to modify the DNS records for that domain. In order to modify the DNS records, that software will also need to have access to the credentials for the DNS service (e.g. the login and password, or a cryptographic token), and those credentials will have to be stored wherever the automation takes place. In many cases, this means that if the machine handling the process gets compromised, so will the DNS credentials, and this is where the real danger lies.
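For illustration, with cert-manager the DNS-01 solver is configured on the (Cluster)Issuer; the sketch below assumes cert-manager v1 and a Cloudflare API token stored in a Secret, and all names and the email address are placeholders.
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-dns01
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com                 # placeholder
    privateKeySecretRef:
      name: letsencrypt-dns01-account-key    # ACME account key secret
    solvers:
    - dns01:
        cloudflare:
          apiTokenSecretRef:                 # DNS API credentials kept in the cluster
            name: cloudflare-api-token
            key: api-token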
I would also suggest another approach: use a simple nginx pod that redirects all http traffic to https.
There is a tutorial on Medium with an nginx configuration you might try to use.
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config
data:
  nginx.conf: |
    server {
      listen 80 default_server;
      server_name _;
      return 301 https://$host$request_uri;
    }
---
apiVersion: v1
kind: Service
metadata:
  name: redirect
  labels:
    app: redirect
spec:
  ports:
  - port: 80
    name: http
  selector:
    app: redirect
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redirect
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redirect
  template:
    metadata:
      labels:
        app: redirect
    spec:
      containers:
      - name: redirect
        image: nginx:stable
        resources:
          requests:
            cpu: "100m"
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: /etc/nginx/conf.d
          name: config
      volumes:
      - name: config
        configMap:
          name: nginx-config
Additionally, you would have to change your virtual service to send all the traffic except the /.well-known/acme-challenge prefix to the nginx redirect service defined above.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: acme-solver
  namespace: istio-system
spec:
  hosts:
  - "*"
  gateways:
  - acme-gateway
  http:
  - name: "acmesolver"
    match:
    - uri:
        prefix: /.well-known/acme-challenge
    route:
    - destination:
        host: acme-solver.istio-system.svc.cluster.local # the ACME solver service from the question
        port:
          number: 8089
  - name: "nginx"
    route:
    - destination:
        host: redirect # the nginx redirect Service defined above

Why does using TLS lead to an upstream error when using istio in a Kubernetes Cluster

I am trying to deploy a Service in a Kubernetes Cluster. Everything works fine as long as I do not use TLS.
My Setup is like this:
Azure Kubernetes Cluster with Version 1.15.7
Istio 1.4.2
What I have done so far: I created the cluster and installed Istio with the following command:
istioctl manifest apply --set values.grafana.enabled=true \
  --set values.tracing.enabled=true \
  --set values.tracing.provider=jaeger \
  --set values.global.mtls.enabled=false \
  --set values.global.imagePullPolicy=Always \
  --set values.kiali.enabled=true \
  --set "values.kiali.dashboard.jaegerURL=http://jaeger-query:16686" \
  --set "values.kiali.dashboard.grafanaURL=http://grafana:3000"
Everything starts up and all pods are running.
Then I create a Gateway
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: ddhub-ingressgateway
  namespace: config
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*.example.de"
    # tls:
    #   httpsRedirect: true # sends 301 redirect for http requests
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE
      serverCertificate: /etc/istio/ingressgateway-certs/tls.crt
      privateKey: /etc/istio/ingressgateway-certs/tls.key
    hosts:
    - "*.example.de"
  - port:
      number: 31400
      name: tcp
      protocol: TCP
    hosts:
    - "*.example.de"
I then import my custom certificates, which I assume also work, since they are mounted correctly and when I access my service in the browser I can see the secure-connection properties with all the expected values.
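(In that Istio version, the paths under /etc/istio/ingressgateway-certs are conventionally backed by a secret named istio-ingressgateway-certs in istio-system; creating it typically looks like the command below, where the key and cert file names are placeholders.)
kubectl create -n istio-system secret tls istio-ingressgateway-certs \
  --key wildcard.example.de.key --cert wildcard.example.de.crt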
This is my deployed service:
kind: Service
apiVersion: v1
metadata:
  name: hellohub-frontend
  labels:
    app: hellohub-frontend
  namespace: dev
spec:
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080
  type: ClusterIP
  selector:
    app: hellohub-frontend
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hellohub-frontend
  namespace: dev
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: hellohub-frontend
    spec:
      containers:
      - image: ddhubregistry.azurecr.io/hellohub-frontend:latest
        imagePullPolicy: Always
        name: hellohub-frontend
        volumeMounts:
        - name: azure
          mountPath: /cloudshare
        ports:
        - name: http
          containerPort: 8080
      volumes:
      - name: azure
        azureFile:
          secretName: cloudshare-dev
          shareName: ddhub-share-dev
          readOnly: true
and the Virtual Service:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: hellohub-frontend
  namespace: dev
spec:
  hosts:
  - "dev-hellohub.example.de"
  gateways:
  - config/ddhub-ingressgateway
  http:
  - match:
    - uri:
        prefix: /
    route:
    - destination:
        host: hellohub-frontend.dev.svc.cluster.local
        port:
          number: 8080
When I access the service over http, the page of my service shows up. When using https I always get "upstream connect error or disconnect/reset before headers. reset reason: connection termination".
What am I missing, or what am I doing wrong? What is different that keeps Kubernetes from finding my service? I understand that my config terminates TLS at the gateway and that the communication inside the cluster should be the same either way, but this does not seem to be the case.
Another question is how to enable debug logs for the sidecars; I could not find a working way.
Thanks in advance!
Have you tried using istioctl to change the log level of istio-proxy?
istioctl proxy-config log <pod-name[.namespace]> --level all:warning,http:debug,redis:debug
It seems the gateway tried to access your upstream in mTLS mode through the Envoy proxy, but no Envoy sidecar was found for your container "hellohub-frontend". Have you enabled istio-injection for your namespace "dev" or for the pod, and also defined the mTLS policy?
apiVersion: "authentication.istio.io/v1alpha1"
kind: "Policy"
metadata:
name: "default"
spec:
peers:
- mtls:
mode: STRICT
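If sidecar injection is not yet enabled for the "dev" namespace, the usual way to turn it on is the namespace label below, followed by re-creating the workload's pods (a sketch; adjust to how Istio was installed in your cluster):
# enable automatic sidecar injection for the dev namespace
kubectl label namespace dev istio-injection=enabled
# re-create the workload so its pods come up with the Envoy sidecar
kubectl rollout restart deployment hellohub-frontend -n dev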

How to set up https on kubernetes bare metal using traefik ingress controller

I'm running a kubernetes cluster which consists of three nodes and works brilliantly, but it's time to make my web application secure, so I deployed an ingress controller (traefik). However, I was unable to find instructions for setting up https on it. I know most of the things I will have to do, like setting up a "secret" (container with certs) etc., but I was wondering how to configure my ingress controller and all the files related to it so I can use a secure connection.
I have already configured the ingress controller and created some frontends and backends. I also configured the nginx server (it's actually the web application I'm running) to listen on port 443.
My web application deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nginx
  replicas: 3 # tells deployment to run 3 pods matching the template
  template: # create pods using pod definition in this template
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: ilchub/my-nginx
        ports:
        - containerPort: 443
      tolerations:
      - key: "primary"
        operator: Equal
        value: "true"
        effect: "NoSchedule"
Traefik ingress controller deployment code
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: traefik-ingress
  namespace: kube-system
  labels:
    k8s-app: traefik-ingress-lb
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: traefik-ingress-lb
  template:
    metadata:
      labels:
        k8s-app: traefik-ingress-lb
      name: traefik-ingress-lb
    spec:
      serviceAccountName: traefik-ingress
      terminationGracePeriodSeconds: 60
      containers:
      - image: traefik
        name: traefik-ingress-lb
        ports:
        - name: http
          containerPort: 80
        - name: https
          containerPort: 443
        - name: admin
          containerPort: 8080
        args:
        - --api
        - --kubernetes
        - --logLevel=INFO
Ingress for traefik dashboard
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: traefik-web-ui
  namespace: kube-system
spec:
  rules:
  - host: cluster.aws.ctrlok.dev
    http:
      paths:
      - path: /
        backend:
          serviceName: traefik-web-ui
          servicePort: web
External expose related config
kind: Service
apiVersion: v1
metadata:
  name: traefik-ingress-service
  namespace: kube-system
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
  - protocol: TCP
    port: 80
    nodePort: 30036
    name: web
  - protocol: TCP
    port: 443
    nodePort: 30035
    name: secure
  - protocol: TCP
    port: 8080
    nodePort: 30034
    name: admin
  type: NodePort
What I want to do is secure my application, which is already running. The final result has to be a web page served over https.
Actually, you have 3 ways to configure Traefik to use https to communicate with backend pods:
1. If the service port defined in the ingress spec is 443 (note that you can still use targetPort to use a different port on your pod).
2. If the service port defined in the ingress spec has a name that starts with https (such as https-api, https-web or just https).
3. If the ingress spec includes the annotation ingress.kubernetes.io/protocol: https.
If any of those configuration options exists, then the backend communication protocol is assumed to be TLS, and Traefik will connect via TLS automatically.
Also additional authentication annotations should be added to the Ingress object, like:
ingress.kubernetes.io/auth-tls-secret: secret
And of course, add a TLS Certificate to the Ingress
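As an illustration only, a TLS-enabled Ingress for the nginx application could look roughly like the sketch below; the secret name and the backing Service (nginx-svc) are placeholders for objects not shown in the question, and the protocol annotation is the third option listed above.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-tls
  annotations:
    kubernetes.io/ingress.class: traefik
    # tell Traefik to talk TLS to the backend pods (which listen on 443)
    ingress.kubernetes.io/protocol: https
spec:
  tls:
  - hosts:
    - cluster.aws.ctrlok.dev
    secretName: nginx-tls-cert    # placeholder: a kubernetes.io/tls secret with cert and key
  rules:
  - host: cluster.aws.ctrlok.dev
    http:
      paths:
      - path: /
        backend:
          serviceName: nginx-svc  # placeholder: a Service exposing the nginx pods on port 443
          servicePort: 443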

Health Checks in GKE in GCloud resets after I change it from HTTP to TCP

I'm working on a Kubernetes cluster where I am directing traffic from GCloud Ingress to my Services. One of the service's endpoints fails its health check as HTTP but passes it as TCP.
When I change the health check options inside GCloud to be TCP, the health checks pass, and my endpoint works, but after a few minutes, the health check on GCloud resets for that port back to HTTP and health checks fail again, giving me a 502 response on my endpoint.
I don't know if it's a bug inside Google Cloud or something I'm doing wrong in Kubernetes. I have pasted my YAML configuration here:
namespace
apiVersion: v1
kind: Namespace
metadata:
  name: parity
  labels:
    name: parity
storageclass
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: classic-ssd
  namespace: parity
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
  zones: us-central1-a
reclaimPolicy: Retain
secret
apiVersion: v1
kind: Secret
metadata:
  name: tls-secret
  namespace: ingress-nginx
data:
  tls.crt: ./config/redacted.crt
  tls.key: ./config/redacted.key
statefulset
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: parity
  namespace: parity
  labels:
    app: parity
spec:
  replicas: 3
  selector:
    matchLabels:
      app: parity
  serviceName: parity
  template:
    metadata:
      name: parity
      labels:
        app: parity
    spec:
      containers:
      - name: parity
        image: "etccoop/parity:latest"
        imagePullPolicy: Always
        args:
        - "--chain=classic"
        - "--jsonrpc-port=8545"
        - "--jsonrpc-interface=0.0.0.0"
        - "--jsonrpc-apis=web3,eth,net"
        - "--jsonrpc-hosts=all"
        ports:
        - containerPort: 8545
          protocol: TCP
          name: rpc-port
        - containerPort: 443
          protocol: TCP
          name: https
        readinessProbe:
          tcpSocket:
            port: 8545
          initialDelaySeconds: 650
        livenessProbe:
          tcpSocket:
            port: 8545
          initialDelaySeconds: 650
        volumeMounts:
        - name: parity-config
          mountPath: /parity-config
          readOnly: true
        - name: parity-data
          mountPath: /parity-data
      volumes:
      - name: parity-config
        secret:
          secretName: parity-config
  volumeClaimTemplates:
  - metadata:
      name: parity-data
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: "classic-ssd"
      resources:
        requests:
          storage: 50Gi
service
apiVersion: v1
kind: Service
metadata:
  labels:
    app: parity
  name: parity
  namespace: parity
  annotations:
    cloud.google.com/app-protocols: '{"my-https-port":"HTTPS","my-http-port":"HTTP"}'
spec:
  selector:
    app: parity
  ports:
  - name: default
    protocol: TCP
    port: 80
    targetPort: 80
  - name: rpc-endpoint
    port: 8545
    protocol: TCP
    targetPort: 8545
  - name: https
    port: 443
    protocol: TCP
    targetPort: 443
  type: LoadBalancer
ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-parity
  namespace: parity
  annotations:
    #nginx.ingress.kubernetes.io/rewrite-target: /
    kubernetes.io/ingress.global-static-ip-name: cluster-1
spec:
  tls:
  - secretName: tls-classic
    hosts:
    - www.redacted.com
  rules:
  - host: www.redacted.com
    http:
      paths:
      - path: /
        backend:
          serviceName: web
          servicePort: 8080
      - path: /rpc
        backend:
          serviceName: parity
          servicePort: 8545
Issue
I've redacted hostnames and such, but this is my basic configuration. I've also run a hello-app container for debugging, following this documentation: https://cloud.google.com/kubernetes-engine/docs/tutorials/hello-app
That is what the / ingress endpoint points to on port 8080 for the hello-app service. It works fine and isn't the issue, but I mention it here for clarification.
So, the issue here is that, after creating my cluster with GKE and my ingress LoadBalancer on Google Cloud (the cluster-1 global static ip name in the Ingress file), and then creating the Kubernetes configuration in the files above, the Health-Check fails for the /rpc endpoint on Google Cloud when I go to Google Compute Engine -> Health Check -> Specific Health-Check for the /rpc endpoint.
When I edit that Health-Check to not use HTTP Protocol and instead use TCP Protocol, health-checks pass for the /rpc endpoint and I can curl it just fine after and it returns me the correct response.
The issue is that a few minutes after that, the same Health-Check goes back to HTTP protocol even though I edited it to be TCP, and then the health-checks fail and I get a 502 response when I curl it again.
I am not sure if there's a way to attach the Google Cloud Health Check configuration to my Kubernetes Ingress prior to creating the Ingress in kubernetes. Also not sure why it's being reset, can't tell if it's a bug on Google Cloud or something I'm doing wrong in Kubernetes. If you notice on my statefulset deployment, I have specified livenessProbe and readinessProbe to use TCP to check the port 8545.
The delay of 650 seconds was due to this ticket issue here which was solved by increasing the delay to greater than 600 seconds (to avoid mentioned race conditions): https://github.com/kubernetes/ingress-gce/issues/34
I really am not sure why the Google Cloud health-check is resetting back to HTTP after I've specified it to be TCP. Any help would be appreciated.
I found a solution: I added a new container to my stateful set that serves a health check on a /healthz endpoint, and configured the ingress health check to probe that endpoint on the port 8080 assigned by Kubernetes as an HTTP health check, which made it work.
It's still not immediately obvious why the reset happens when the check is set to TCP.
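For completeness, the general shape of that fix is a small sidecar that answers 200 on /healthz together with an HTTP readinessProbe on that port, since the GCE ingress controller typically derives its health check from the serving pod's readinessProbe; the image, port and probe values below are illustrative placeholders rather than the exact ones used.
# extra container added to the StatefulSet pod spec, next to the parity container
- name: healthz
  image: gcr.io/google-samples/hello-app:1.0   # placeholder: anything that returns HTTP 200
  ports:
  - containerPort: 8080
    name: healthz
  readinessProbe:
    httpGet:
      path: /healthz                           # path the ingress health check will probe
      port: 8080
    initialDelaySeconds: 10
    periodSeconds: 10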