K8s / Cert-Manager / Errors and Possible YAML Misconfiguration

I am attempting to get cert-manager working with Let's Encrypt and I am running into some interesting errors, such as cert-manager not being able to access resources. I am also seeing two Certificates and two CertificateRequests when I would expect only one of each. I've attached some pictures of the logging and the output from the certs and cert requests. I've tried quite a few adjustments, but I seem to be spinning my wheels. Any help is greatly appreciated!
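For context, here is a rough sketch of the commands typically used to trace where each Certificate and CertificateRequest comes from, assuming a default cert-manager install where the controller Deployment is named cert-manager. Note that the cert-manager.io/cluster-issuer annotation on an Ingress also makes ingress-shim create a Certificate for the Ingress's tls section, in addition to any Certificate resource defined explicitly, which can account for seeing two of each. The full manifest follows below.
kubectl get clusterissuers,certificates,certificaterequests,orders.acme.cert-manager.io,challenges.acme.cert-manager.io -A
kubectl -n cert-manager describe certificate coreyperkinsdev-production-certificate
kubectl -n default describe certificate
kubectl -n cert-manager logs deploy/cert-manager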
---
kind: Secret
apiVersion: v1
metadata:
name: coreyperkinsdev-production-clusterissuer-acme
namespace: default
data:
coreyperkinsdevacmedns.json: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBdEx3U3hZcVRzTWpuejZrVytUWVl5cDh1cFRaZkFWc1U1bGQvZzE4aXJ6T1pwaEZyCkRwdnJ5TkhLam9CMWJ2TjNtdDM0VkxWK2JQM3NHZEJCSW83MEVoTzl3VS83TVRDZVJScGtISmlCYzc3dUJRUGgKTjVaalZUM0lEOEIyUzZTOWFONWVMNUhaWHYrbExhd0pjUEdVYUtlcVpmVG1XTVVHOFFHTTJtK1pJWGNNY1dyUwpRZ1hSSnM5anUzQndsWGZGR1hOeGJ6SnlyL3ZKZkwrVkU4Y09YN1h0WlBtQ3BjWEd2TVUzQ3BuaTJJYTJiUmxJClFvb0w3N2I4SWJDTm12VVM3Y3U3c2J3cTRVbUJUaWRUYVU3UFRtQjJuMXZRTm9uZjllRVRkSDkwYkpURFFadDQKRURaa0kzeU1QbXl5NU5tV3poOEViVGFSQ2VjU05PM2ZkSHNQQndJREFRQUJBb0lCQUZ6T2VzRXZjTG1GNE90VQpnN1NDMytZa0tYcXkyY1JEZGc3MS9VZURjYmNQd3lWd3FrMjlLUzFjOVN1SnNVUEJCN2pocEtReThuR2JUa0xQCkdyTlpQdUdOZTVJcHRPNUViZkZFSlFZK0ZiTk81c0J5aHBMWnliWTYzR0dpVGR4NEFyODMrSnRxeEhrd3d5d1oKdkZ0ZjRmcC9wbE5tbDJZYi9uMkJjMGV6cUtseXNHbzZDL3IxdmRZZ0RqTEtzT1I0MTF4WS8rN0xuS2JBQWpVcApoQTRseG5LaFpjMzZONmFjQVowdGZPL1FzcXpqNWRsQTduMlgvQ1lSRkxXdjZML09PdHZlWlZCeGtqT0xQL3hxClhKZXJjSE0vMmN2YVZtbzVTM2ltRlNWNkxKeEF4UW10T1E4NUdTVHRXMWVIQUsrQldXL000eHdUM0NPcUx1aDIKSXdQcUlha0NnWUVBMWRxRkc5N3NQZE80bUFPdUcyWUJ5VkpnWXhrYzBvK0trTm1ITXc4YkJDanpZdXJDaTVFUAptZkFJV25DYUxQaSsrSE9FbXF3M0RVY1lMOFVWcHJkMkpBVG96VVFWcXBJbHUzQ3RzenVicGF6Rk9pUVBRVHZwCnp1dmNEMFpHeStsNE9OK2lYdllQQVBzbGQreHYzcFQxOWRsWFBVRGJYUHNvRXFsZ0V3bDNsN1VDZ1lFQTJGcWQKbExJSVlwWlVIUjFJbTI5c1J6Y2RyRjNjd21pd1lnVEVmT09QVVpwY05TVnkvc1hmOFkveEFkWHo2bkVsaGhXMApxUGtzbHJjdm9XM29DcXF4dElwQ3JCcUxWTjg4azFHZjNjUFRJR1FkeDh5ajVKdDJtbFRpU3Eyc3kxMUpUa3FPCmVzdk0wS1ZJT0lQMm11dXVaeFNpNEgxakE1Nlp2QXI2cnFMVFNVc0NnWUEwaVM1U0huMmk0clJpZytUdHppMTYKSzhhS0VjMUczUVNKZVNjQm9DQmU4VUI1ZUhxNmxyUmllTmxVZm4waHR5b1RGeTNvWVk1VXNMWjhaY3BmM29vagpaeUZaNi9QMnAxaWxwNVRFaDB4QmN5UXdtRk0zRDJUczlIeG5ORGlJTjU3Vk9mdEZvT1VtdEl3TDNnWE5oSUs0CkZ1Q2JwNmM4UEdjbnpueFBzTyswVVFLQmdISE0rQ1pHbnZKOGNESUFQVGpGR3djNmpua2p4Z0xjWGlxd3AwbXAKeUxEN3FKU3I1aGpzckNhN3QrRm5VSzE0Wm14bzdtWVM2c2s4QWVtL2pkWk9ncnFjSHdXMzBLSUw2aWp6UGt1Lwp2VVhFWTRXOHRsaUJEWm1RSEpkN1V2Q0ZXUkc5VmNSeGZvSWc3aVFNQmFMblpRMERaY2ovS3gyMFJ0a0tUV0dlCmM5U1JBb0dCQUpZY1hpU3RRNkVyM0xqeit2TTUzN05NdENIVFdBZ05yZzM0ZWdYMzJyZVZZdkd5amhKYk1yaEwKSkxxT0U5NmE3OUJZbDRyZERoNUliL1psUmhOUUpleENrRVFLUm12QmkwVHFUZ21EZmhxa3plOVRKTllRbndBRApHMU5xL0VaN2RCQ3gvcmlhbTRxUktxc0FwWDhWcjc3NGFKSCsvcFBuZ0xucHI3emFib0hyCi0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==
tls.key: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBdEx3U3hZcVRzTWpuejZrVytUWVl5cDh1cFRaZkFWc1U1bGQvZzE4aXJ6T1pwaEZyCkRwdnJ5TkhLam9CMWJ2TjNtdDM0VkxWK2JQM3NHZEJCSW83MEVoTzl3VS83TVRDZVJScGtISmlCYzc3dUJRUGgKTjVaalZUM0lEOEIyUzZTOWFONWVMNUhaWHYrbExhd0pjUEdVYUtlcVpmVG1XTVVHOFFHTTJtK1pJWGNNY1dyUwpRZ1hSSnM5anUzQndsWGZGR1hOeGJ6SnlyL3ZKZkwrVkU4Y09YN1h0WlBtQ3BjWEd2TVUzQ3BuaTJJYTJiUmxJClFvb0w3N2I4SWJDTm12VVM3Y3U3c2J3cTRVbUJUaWRUYVU3UFRtQjJuMXZRTm9uZjllRVRkSDkwYkpURFFadDQKRURaa0kzeU1QbXl5NU5tV3poOEViVGFSQ2VjU05PM2ZkSHNQQndJREFRQUJBb0lCQUZ6T2VzRXZjTG1GNE90VQpnN1NDMytZa0tYcXkyY1JEZGc3MS9VZURjYmNQd3lWd3FrMjlLUzFjOVN1SnNVUEJCN2pocEtReThuR2JUa0xQCkdyTlpQdUdOZTVJcHRPNUViZkZFSlFZK0ZiTk81c0J5aHBMWnliWTYzR0dpVGR4NEFyODMrSnRxeEhrd3d5d1oKdkZ0ZjRmcC9wbE5tbDJZYi9uMkJjMGV6cUtseXNHbzZDL3IxdmRZZ0RqTEtzT1I0MTF4WS8rN0xuS2JBQWpVcApoQTRseG5LaFpjMzZONmFjQVowdGZPL1FzcXpqNWRsQTduMlgvQ1lSRkxXdjZML09PdHZlWlZCeGtqT0xQL3hxClhKZXJjSE0vMmN2YVZtbzVTM2ltRlNWNkxKeEF4UW10T1E4NUdTVHRXMWVIQUsrQldXL000eHdUM0NPcUx1aDIKSXdQcUlha0NnWUVBMWRxRkc5N3NQZE80bUFPdUcyWUJ5VkpnWXhrYzBvK0trTm1ITXc4YkJDanpZdXJDaTVFUAptZkFJV25DYUxQaSsrSE9FbXF3M0RVY1lMOFVWcHJkMkpBVG96VVFWcXBJbHUzQ3RzenVicGF6Rk9pUVBRVHZwCnp1dmNEMFpHeStsNE9OK2lYdllQQVBzbGQreHYzcFQxOWRsWFBVRGJYUHNvRXFsZ0V3bDNsN1VDZ1lFQTJGcWQKbExJSVlwWlVIUjFJbTI5c1J6Y2RyRjNjd21pd1lnVEVmT09QVVpwY05TVnkvc1hmOFkveEFkWHo2bkVsaGhXMApxUGtzbHJjdm9XM29DcXF4dElwQ3JCcUxWTjg4azFHZjNjUFRJR1FkeDh5ajVKdDJtbFRpU3Eyc3kxMUpUa3FPCmVzdk0wS1ZJT0lQMm11dXVaeFNpNEgxakE1Nlp2QXI2cnFMVFNVc0NnWUEwaVM1U0huMmk0clJpZytUdHppMTYKSzhhS0VjMUczUVNKZVNjQm9DQmU4VUI1ZUhxNmxyUmllTmxVZm4waHR5b1RGeTNvWVk1VXNMWjhaY3BmM29vagpaeUZaNi9QMnAxaWxwNVRFaDB4QmN5UXdtRk0zRDJUczlIeG5ORGlJTjU3Vk9mdEZvT1VtdEl3TDNnWE5oSUs0CkZ1Q2JwNmM4UEdjbnpueFBzTyswVVFLQmdISE0rQ1pHbnZKOGNESUFQVGpGR3djNmpua2p4Z0xjWGlxd3AwbXAKeUxEN3FKU3I1aGpzckNhN3QrRm5VSzE0Wm14bzdtWVM2c2s4QWVtL2pkWk9ncnFjSHdXMzBLSUw2aWp6UGt1Lwp2VVhFWTRXOHRsaUJEWm1RSEpkN1V2Q0ZXUkc5VmNSeGZvSWc3aVFNQmFMblpRMERaY2ovS3gyMFJ0a0tUV0dlCmM5U1JBb0dCQUpZY1hpU3RRNkVyM0xqeit2TTUzN05NdENIVFdBZ05yZzM0ZWdYMzJyZVZZdkd5amhKYk1yaEwKSkxxT0U5NmE3OUJZbDRyZERoNUliL1psUmhOUUpleENrRVFLUm12QmkwVHFUZ21EZmhxa3plOVRKTllRbndBRApHMU5xL0VaN2RCQ3gvcmlhbTRxUktxc0FwWDhWcjc3NGFKSCsvcFBuZ0xucHI3emFib0hyCi0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coreyperkins-deployment
  labels:
    version: v1
spec:
  replicas: 3
  selector:
    matchLabels:
      app: coreyperkins-frontend
      version: v1
  template:
    metadata:
      labels:
        app: coreyperkins-frontend
        version: v1
    spec:
      containers:
      - name: coreyperkins-frontend
        image: coreyperkinsdev.azurecr.io/www:52
        resources:
          requests:
            cpu: "100m"
        imagePullPolicy: Always
        ports:
        - containerPort: 80
      imagePullSecrets:
      - name: coreyperkinsdev-acr-secret
---
apiVersion: v1
kind: Service
metadata:
  name: coreyperkins-service
  labels:
    app: coreyperkins-frontend
spec:
  ports:
  - protocol: TCP
    port: 5000
    targetPort: 80
    name: http
  selector:
    app: coreyperkins-frontend
---
apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: coreyperkinsdev-production-clusterissuer
  namespace: cert-manager
spec:
  acme:
    email: corey.perkins@gmail.com
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: coreyperkinsdev-production-clusterissuer-acme
    solvers:
    - dns01:
        acmedns:
          host: https://acme-staging-v02.api.letsencrypt.org/directory
          accountSecretRef:
            name: coreyperkinsdev-production-clusterissuer-acme
            key: coreyperkinsdevacmedns.json
---
apiVersion: cert-manager.io/v1alpha2
kind: Certificate
metadata:
  name: coreyperkinsdev-production-certificate
  namespace: cert-manager
spec:
  secretName: coreyperkinsdev-production-clusterissuer-acme
  issuerRef:
    name: coreyperkinsdev-production-clusterissuer
    kind: ClusterIssuer
  commonName: coreyperkins.dev
  dnsNames:
  - coreyperkins.dev
  - '*.coreyperkins.dev'
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: coreyperkinsdev-ingress
  namespace: default
  annotations:
    cert-manager.io/cluster-issuer: coreyperkinsdev-production-clusterissuer
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  tls:
  - hosts:
    - coreyperkins.dev
    - '*.coreyperkins.dev'
    secretName: coreyperkinsdev-production-clusterissuer-acme
  rules:
  - host: www.coreyperkins.dev
  - http:
      paths:
      - path: /?(.*)
        backend:
          serviceName: coreyperkins-service
          servicePort: 5000
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: coreyperkinsdev-ng-cm
data:
  http-snippet: |
    types {
      module;
    }
---

Related

Microk8s/Kubernetes does not use the Let's Encrypt auto-generated certificate

I have the following k8s config:
---
kind: Namespace
apiVersion: v1
metadata:
name: test
labels:
name: test
---
apiVersion: apps/v1
kind: Deployment
metadata:
namespace: test
name: test-depl
spec:
selector:
matchLabels:
app: test-app
template:
metadata:
labels:
app: test-app
spec:
containers:
- name: test-app
image: jfsanchez91/http-test-server
---
apiVersion: v1
kind: Service
metadata:
namespace: test
name: test-svc
spec:
selector:
app: test-app
ports:
- name: test-app
protocol: TCP
port: 80
targetPort: 8090
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
namespace: test
name: letsencrypt-cert-issuer-test-staging
spec:
acme:
email: email@example.com
server: https://acme-staging-v02.api.letsencrypt.org/directory
privateKeySecretRef:
name: letsencrypt-cert-issuer-test-staging
solvers:
- http01:
ingress:
class: public
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
namespace: test
name: letsencrypt-cert-issuer-test-prod
spec:
acme:
email: email@example.com
server: https://acme-v02.api.letsencrypt.org/directory
privateKeySecretRef:
name: letsencrypt-cert-issuer-test-prod
solvers:
- http01:
ingress:
class: public
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
namespace: test
name: ingress-routes
annotations:
kubernetes.io/ingress.class: nginx
cert-manager.io/cluster-issuer: "letsencrypt-cert-issuer-test-prod"
spec:
tls:
- hosts:
- test.example.com
secretName: tls-secret
rules:
- host: test.example.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: test-svc
port:
number: 80
The Let's Encrypt certificate is being issued and stored in tls-secret correctly.
But when I then try to open test.example.com I get an invalid certificate (the K8s default certificate): NET::ERR_CERT_AUTHORITY_INVALID.
Common Name (CN): Kubernetes Ingress Controller Fake Certificate
Organization (O): Acme Co
Q: How can I configure Ingress correctly to use the Let's Encrypt certificate?
Q: Is there anything else I should configure?
UPDATE: tls-secret type (kubernetes.io/tls):
$ kubectl -n test describe secrets tls-secret
Name: tls-secret
Namespace: test
Labels: <none>
Annotations: cert-manager.io/alt-names: test.example.com
cert-manager.io/certificate-name: tls-secret
cert-manager.io/common-name: test.example.com
cert-manager.io/ip-sans:
cert-manager.io/issuer-group: cert-manager.io
cert-manager.io/issuer-kind: ClusterIssuer
cert-manager.io/issuer-name: letsencrypt-cert-issuer-test-prod
cert-manager.io/uri-sans:
Type: kubernetes.io/tls
Data
====
tls.key: 1679 bytes
tls.crt: 5599 bytes
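For reference, a quick way to confirm which certificate the controller is actually presenting for that host (the "Kubernetes Ingress Controller Fake Certificate" is what ingress-nginx serves when it cannot match the request to a usable TLS secret) is an SNI check with openssl; the controller namespace and resource name below are assumptions, based on the microk8s ingress addon:
openssl s_client -connect test.example.com:443 -servername test.example.com </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer
kubectl -n ingress logs daemonset/nginx-ingress-microk8s-controller | grep -i test.example.com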
I'd recommend creating the Certificate yourself in order to have more control over which subdomains to include and over the renewal policy:
kubectl -n $NAMESPACE apply -f certificate.yaml
For example, for a domain whose DNS is hosted in an Azure DNS zone:
#certificate.yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
name: cert-wildcard
spec:
duration: 2160h # 90d
renewBefore: 360h # 15d
secretName: cert-wildcard
issuerRef: #from issuer.yaml
name: letsencrypt-prod
kind: ClusterIssuer
commonName: domain.com # go to the domain, go to the certificate, go to Details, go to Common Name
dnsNames: # list of all the different domains associated with the certificate
- domain.com
- sub.domain.com
acme:
config:
- dns01:
provider: azure-dns
domains:
- domain.com
- sub.domain.com
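After applying it, the Certificate status can be watched until it reports Ready, for example:
kubectl -n $NAMESPACE apply -f certificate.yaml
kubectl -n $NAMESPACE get certificate cert-wildcard -w
kubectl -n $NAMESPACE describe certificate cert-wildcard
Note that with cert-manager.io/v1 the DNS-01 solver configuration (the trailing acme/config block above) lives on the ClusterIssuer rather than on the Certificate, so it may need to be moved there.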

How To Enable SSL on Google Kubernetes Engine while using ingress-nginx?

I am using GKE with ingress-nginx (https://kubernetes.github.io/ingress-nginx/). I tried many cert-manager tutorials but was unable to get it working.
Could you share an example YAML file if you have been able to get SSL working with ingress-nginx on Google Kubernetes Engine?
You can use this as a starting point and expand on it
apiVersion: apps/v1
kind: Deployment
metadata:
name: arecord-depl
spec:
replicas: 1
selector:
matchLabels:
app: arecord
template:
metadata:
labels:
app: arecord
spec:
containers:
- name: arecord
image: gcr.io/clear-shell-346807/arecord
ports:
- containerPort: 8080
env:
- name: PORT
value: "8080"
---
apiVersion: v1
kind: Service
metadata:
name: arecord-srv
spec:
selector:
app: arecord
ports:
- name: arecord
protocol: TCP
port: 8080
targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: echo-ingress
annotations:
kubernetes.io/ingress.class: "gce"
kubernetes.io/ingress.global-static-ip-name: ssl-ip
spec:
tls:
- hosts:
- vareniyam.me
secretName: echo-tls
rules:
- host: vareniyam.me
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: arecord-srv
port:
number: 8080
You have said you're using nginx ingress, but your ingress class says gce:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: echo-ingress
annotations:
kubernetes.io/ingress.class: "gce"
You have not indicated which ClusterIssuer or Issuer you want to use. cert-manager only issues certificates after you tell it that you want one created, for example with a Certificate resource or the cert-manager.io/cluster-issuer annotation on an Ingress.
I am unsure what tutorials you have tried, but have you tried looking at the cert-manager docs here: https://cert-manager.io/docs/
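For completeness, here is a minimal sketch of what that usually looks like with ingress-nginx and an HTTP-01 ClusterIssuer; the issuer name, e-mail, and account-key secret name below are placeholders rather than anything taken from the question:
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    email: you@example.com
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-prod-account-key
    solvers:
    - http01:
        ingress:
          class: nginx
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echo-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  tls:
  - hosts:
    - vareniyam.me
    secretName: echo-tls
  rules:
  - host: vareniyam.me
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: arecord-srv
            port:
              number: 8080
The important parts are that the Ingress class matches the solver's ingress class (nginx here, not gce) and that the cert-manager.io/cluster-issuer annotation points at an existing ClusterIssuer.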

GKE BackendConfig not working with customRequestHeaders

I have a nodejs application running on Google Kubernetes Engine (v1.20.8-gke.900)
I want to add custom headers to get the client's region and latitude/longitude, so I referred to this article and this one as well, and created the Kubernetes config file below, but when I print the headers I am not getting any of the custom headers.
#k8s.yaml
apiVersion: v1
kind: Namespace
metadata:
name: my-app-ns-prod
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: npm-app-deployment
namespace: my-app-ns-prod
labels:
app: npm-app-deployment
tier: backend
spec:
template:
metadata:
name: npm-app-pod
namespace: my-app-ns-prod
labels:
app: npm-app-pod
tier: backend
spec:
containers:
- name: my-app-container
image: us.gcr.io/img/my-app:latest
ports:
- containerPort: 3000
protocol: TCP
envFrom:
- secretRef:
name: npm-app-secret
- configMapRef:
name: npm-app-configmap
imagePullPolicy: Always
imagePullSecrets:
- name: gcr-regcred
replicas: 3
minReadySeconds: 30
selector:
matchLabels:
app: npm-app-pod
tier: backend
---
apiVersion: v1
kind: Service
metadata:
name: npm-app-service
namespace: my-app-ns-prod
annotations:
cloud.google.com/backend-config: '{"ports": {"80":"npm-app-backendconfig"}}'
cloud.google.com/neg: '{"ingress": true}'
spec:
selector:
app: npm-app-pod
tier: backend
ports:
- name: http
protocol: TCP
port: 80
targetPort: 3000
- name: https
protocol: TCP
port: 443
targetPort: 3000
type: LoadBalancer
---
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
name: npm-app-backendconfig
namespace: my-app-ns-prod
spec:
customRequestHeaders:
headers:
- "HTTP-X-Client-CityLatLong:{client_city_lat_long}"
- "HTTP-X-Client-Region:{client_region}"
- "HTTP-X-Client-Region-SubDivision:{client_region_subdivision}"
- "HTTP-X-Client-City:{client_city}"
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress-service
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/use-regex: "true"
spec:
rules:
- host: app.example.com
http:
paths:
- path: /api/v1
pathType: Prefix
backend:
service:
name: npm-app-service
port:
number: 80
---
apiVersion: v1
kind: ConfigMap
metadata:
name: npm-app-configmap
namespace: my-app-ns-prod
data:
APP_ID: "My App"
PORT: "3000"
---
apiVersion: v1
kind: Secret
metadata:
name: npm-app-secret
namespace: my-app-ns-prod
type: Opaque
data:
MONGO_CONNECTION_URI: ""
SESSION_SECRET: ""
Actually the issue was with the Ingress controller: I had missed defining "cloud.google.com/backend-config". Once I defined that I was able to get the custom headers. I also switched from nginx to the GKE Ingress controller (gce), but the same thing works with nginx as well.
This is what my final Ingress looks like:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress-service
annotations:
cloud.google.com/backend-config: '{"default": "npm-app-backendconfig"}'
kubernetes.io/ingress.class: "gce"
spec:
...
...
Reference: User-defined request headers.
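A couple of checks can confirm the BackendConfig is actually attached to the Service the load balancer uses (a rough sketch, using the resource names from the manifests above):
kubectl -n my-app-ns-prod get backendconfig npm-app-backendconfig -o yaml
kubectl -n my-app-ns-prod get service npm-app-service -o yaml | grep backend-config
The custom headers are added by the Google load balancer itself, so they only appear on requests that come in through the GCLB-backed Ingress, not on traffic that reaches the Service or Pods directly.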

Add header with EnvoyFilter does not work

I am testing Istio 1.10.3 on minikube to add headers, but I am not able to do so.
Istio is installed in the istio-system namespace.
The namespace where the deployment is deployed is labeled with istio-injection=enabled.
In the config_dump I can see the Lua code only when the context is set to ANY. When I set it to SIDECAR_OUTBOUND the code is not listed:
"name": "envoy.lua",
"typed_config": {
"#type": "type.googleapis.com/envoy.extensions.filters.http.lua.v3.Lua",
"inline_code": "function envoy_on_request(request_handle)\n request_handle:headers():add(\"request-body-size\", request_handle:body():length())\nend\n\nfunction envoy_on_response(response_handle)\n response_handle:headers():add(\"response-body-size\", response_handle:body():length())\nend\n"
}
Can someone give me some tips?
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
name: headers-envoy-filter
namespace: nginx-echo-headers
spec:
configPatches:
- applyTo: HTTP_FILTER
match:
context: SIDECAR_OUTBOUND
listener:
filterChain:
filter:
name: envoy.filters.network.http_connection_manager
subFilter:
name: envoy.filters.http.router
patch:
operation: INSERT_BEFORE
value:
name: envoy.lua
typed_config:
'@type': type.googleapis.com/envoy.extensions.filters.http.lua.v3.Lua
inline_code: |
function envoy_on_request(request_handle)
request_handle:headers():add("request-body-size", request_handle:body():length())
end
function envoy_on_response(response_handle)
response_handle:headers():add("response-body-size", response_handle:body():length())
end
workloadSelector:
labels:
app: nginx-echo-headers
version: v1
Below is my deployment and Istio configs:
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-echo-headers-v1
namespace: nginx-echo-headers
labels:
version: v1
spec:
selector:
matchLabels:
app: nginx-echo-headers
version: v1
replicas: 2
template:
metadata:
labels:
app: nginx-echo-headers
version: v1
spec:
containers:
- name: nginx-echo-headers
image: brndnmtthws/nginx-echo-headers:latest
ports:
- containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
name: nginx-echo-headers-svc
namespace: nginx-echo-headers
labels:
version: v1
service: nginx-echo-headers-svc
spec:
type: ClusterIP
ports:
- name: http
port: 80
targetPort: 8080
selector:
app: nginx-echo-headers
version: v1
---
# ISTIO GATEWAY
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: nginx-echo-headers-gateway
namespace: istio-system
spec:
selector:
app: istio-ingressgateway
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- "api.decchi.com.ar"
# ISTIO VIRTUAL SERVICE
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: nginx-echo-headers-virtual-service
namespace: nginx-echo-headers
spec:
hosts:
- 'api.decchi.com.ar'
gateways:
- istio-system/nginx-echo-headers-gateway
http:
- route:
- destination:
# k8s service name
host: nginx-echo-headers-svc
port:
# Services port
number: 80
# workload selector
subset: v1
## ISTIO DESTINATION RULE
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: nginx-echo-headers-dest
namespace: nginx-echo-headers
spec:
host: nginx-echo-headers-svc
subsets:
- name: "v1"
labels:
app: nginx-echo-headers
version: v1
It only works when I configure the context as GATEWAY. In that case the EnvoyFilter runs in the istio-system namespace and the workloadSelector is configured like this:
workloadSelector:
labels:
istio: ingressgateway
But my idea is to configure it in SIDECAR_OUTBOUND.
it is only working when I configure the context in GATEWAY, the envoyFilter is running in the istio-system namespace
That's correct! You should apply your EnvoyFilter in the config root namespace, istio-system in your case.
And, most importantly, just omit the context field when matching your configPatches, so that the patch applies to both sidecars and gateways. You can see usage examples in the Istio docs.
Here is an example I managed to come up with
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
name: add-x-cluster-client-ip-header
namespace: istio-system
spec:
configPatches:
- applyTo: ROUTE_CONFIGURATION
match:
context: SIDECAR_INBOUND
patch:
operation: MERGE
value:
request_headers_to_add:
- header:
key: 'x-cluster-client-ip'
value: '%DOWNSTREAM_REMOTE_ADDRESS_WITHOUT_PORT%'
append: false
# the following is used to debug
response_headers_to_add:
- header:
key: 'x-cluster-client-ip'
value: '%DOWNSTREAM_REMOTE_ADDRESS_WITHOUT_PORT%'
append: false
https://gist.github.com/qudongfang/75cf0230c0b2291006f72cd23d45f297
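To verify the patch actually landed in the sidecar's route configuration and that the header is added end to end, something like this can help (the pod name is a placeholder):
# dump the route configuration of one of the workload pods and look for the header
istioctl proxy-config route <nginx-echo-headers-pod> -n nginx-echo-headers -o json | grep -A2 x-cluster-client-ip
# the debug response header should then be visible from outside
curl -s -D - -o /dev/null http://api.decchi.com.ar/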

Couldn't access my Kubernetes service via a traefik reverse proxy

I deployed a Kubernetes cluster (1.8.8) on an OpenStack cloud platform (1 master with a public IP address / 3 nodes). I want to use Traefik (latest version, 1.6.1) as a reverse proxy for accessing my services.
Traefik deployed fine as a DaemonSet and I can access its GUI on port 8081. My Prometheus Ingress appears correctly in the Traefik interface, but I can't access my Prometheus server UI.
Could you tell me what I am doing wrong? Did I miss something?
Thanks
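For reference, a couple of generic first checks for a setup like this (a sketch; the names match the DaemonSet and RBAC objects defined below):
# RBAC or routing errors usually show up in the controller logs
kubectl -n traefik logs ds/traefik-ingress-controller
# can the ServiceAccount actually list Ingress resources?
kubectl auth can-i list ingresses.extensions --as=system:serviceaccount:traefik:traefik-ingress-controller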
Ingress of my prometheus:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: prometheus-ingress
annotations:
kubernetes.io/ingress.class: traefik
traefik.frontend.rule.type: pathprefixstrip
spec:
rules:
- http:
paths:
- path: /prometheus
backend:
serviceName: prometheus-svc
servicePort: prom
My daemonset is below:
apiVersion: v1
kind: Namespace
metadata:
name: traefik
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: traefik-ingress-controller
namespace: traefik
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
name: traefik-ingress-controller
namespace: traefik
labels:
k8s-app: traefik-ingress-lb
kubernetes.io/cluster-service: "true"
spec:
template:
metadata:
labels:
k8s-app: traefik-ingress-lb
name: traefik-ingress-lb
spec:
hostNetwork: true # workaround
serviceAccountName: traefik-ingress-controller
terminationGracePeriodSeconds: 60
tolerations:
- key: node-role.kubernetes.io/master
effect: NoSchedule
containers:
- image: traefik:v1.6.1
name: traefik-ingress-lb
imagePullPolicy: Always
volumeMounts:
- mountPath: "/config"
name: "config"
resources:
requests:
cpu: 100m
memory: 20Mi
args:
- --kubernetes
- --configfile=/config/traefik.toml
volumes:
- name: config
configMap:
name: traefik-conf
---
apiVersion: v1
kind: Service
metadata:
name: traefik-web-ui
namespace: traefik
spec:
selector:
k8s-app: traefik-ingress-lb
ports:
- port: 80
targetPort: 8080
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: traefik-web-ui
namespace: traefik
annotations:
kubernetes.io/ingress.class: traefik
traefik.frontend.rule.type: pathprefixstrip
spec:
rules:
- host: example.com
http:
paths:
- backend:
serviceName: traefik-web-ui
servicePort: 80
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: traefik-ingress-controller
rules:
- apiGroups:
- ""
resources:
- services
- endpoints
- secrets
verbs:
- get
- list
- watch
- apiGroups:
- extensions
resources:
- ingresses
verbs:
- get
- list
- watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: traefik-ingress-controller
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: traefik-ingress-controller
subjects:
- kind: ServiceAccount
name: traefik-ingress-controller
namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
name: traefik-conf
namespace: traefik
data:
traefik.toml: |-
defaultEntryPoints = ["http"]
[entryPoints]
[entryPoints.http]
address = ":80"
[web]
address = ":8081"