Enable IAP on Ingress - kubernetes

I've followed the documentation on how to enable IAP on GKE.
I've:
configured the consent screen
created the OAuth credentials
added the universal redirect URL
added myself as an IAP-secured Web App User
And written my deployment like this:
apiVersion: v1
data:
  client_id: <my_id>
  client_secret: <my_secret>
kind: Secret
metadata:
  name: backend-iap-secret
type: Opaque
---
apiVersion: v1
kind: Service
metadata:
  name: grafana
spec:
  ports:
  - port: 443
    protocol: TCP
    targetPort: 3000
  selector:
    k8s-app: grafana
  type: NodePort
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: grafana
spec:
  replicas: 1
  template:
    metadata:
      labels:
        k8s-app: grafana
    spec:
      containers:
      - env:
        - name: GF_SERVER_HTTP_PORT
          value: "3000"
        image: docker.io/grafana/grafana:6.7.1
        name: grafana
        ports:
        - containerPort: 3000
          protocol: TCP
        readinessProbe:
          httpGet:
            path: /api/health
            port: 3000
---
apiVersion: cloud.google.com/v1beta1
kind: BackendConfig
metadata:
  name: backend-config-iap
spec:
  iap:
    enabled: true
    oauthclientCredentials:
      secretName: backend-iap-secret
---
apiVersion: networking.gke.io/v1beta1
kind: ManagedCertificate
metadata:
  name: monitoring-tls
spec:
  domains:
  - monitoring.foo.com
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    beta.cloud.google.com/backend-config: '{"default": "backend-config-iap"}'
    kubernetes.io/ingress.global-static-ip-name: monitoring
    networking.gke.io/managed-certificates: monitoring-tls
  name: grafana
spec:
  backend:
    serviceName: grafana
    servicePort: 443
When I look at my Ingress I see this:
$ k describe ingress
Name: grafana
[...]
Annotations: beta.cloud.google.com/backend-config: {"default": "backend-config-iap"}
ingress.kubernetes.io/backends: {"k8s-blabla":"HEALTHY"}
[...]
Events: <none>
$
I can connect to the web page without any problem and Grafana is up and running, but I can also connect without being authenticated (which is a problem).
So everything looks fine, but IAP is not activated. Why?
Worse, if I enable it manually it works, but if I redo kubectl apply -f monitoring.yaml IAP is disabled again.
What am I missing?
Because my secret values are stored in Secret Manager (and retrieved at build time), I suspected my secret of having glitches (spaces, \n, etc.) in it, so I added a script to test it:
gcloud compute backend-services update \
  --project=<my_project_id> \
  --global \
  $(kubectl get ingress grafana -o json | jq -r '.metadata.annotations."ingress.kubernetes.io/backends"' | jq -r 'keys[0]') \
  --iap=enabled,oauth2-client-id=$(gcloud --project="<my_project_id>" beta secrets versions access latest --secret=Monitoring_client_id),oauth2-client-secret=$(gcloud --project="<my_project_id>" beta secrets versions access latest --secret=Monitoring_secret)
And now IAP is properly enabled with the correct OAuth client, so my secrets are "clean".
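As an aside, the same Secret Manager values could also be fed straight into the Kubernetes secret, which sidesteps any encoding glitches; a sketch reusing the secret names from the command above:
kubectl create secret generic backend-iap-secret \
  --from-literal=client_id="$(gcloud --project=<my_project_id> beta secrets versions access latest --secret=Monitoring_client_id)" \
  --from-literal=client_secret="$(gcloud --project=<my_project_id> beta secrets versions access latest --secret=Monitoring_secret)"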
By the way, I also tried renaming the secret keys (from client_id) to:
* oauth_client_id
* oauth-client-id
* clientID (like in the BackendConfig documentation)
I've also written the values directly in the BackendConfig like this:
kind: BackendConfig
metadata:
  name: backend-config-iap
spec:
  iap:
    enabled: true
    oauthclientCredentials:
      secretName: backend-iap-secret
      clientID: <value>
      clientSecret: <value>
But that doesn't work either.
Erratum:
The fact that IAP is disabled when I deploy again (after enabling it in the web UI) was caused by my deployment script in this test (it ran a kubectl delete first).
But I still can't enable IAP with my BackendConfig alone.
As suggested, I've filed a bug report: https://issuetracker.google.com/issues/153475658
Solution given by Totem:
Change the given YAML to this:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.global-static-ip-name: monitoring
    networking.gke.io/managed-certificates: monitoring-tls
  name: grafana
[...]
---
apiVersion: v1
kind: Service
metadata:
  name: grafana
  annotations:
    beta.cloud.google.com/backend-config: '{"default": "backend-config-iap"}'
[...]
The BackendConfig is associated with the Service, not the Ingress...
Now it works!

You did everything right, with just one small change:
The annotation should be added on the Service resource:
apiVersion: v1
kind: Service
metadata:
  annotations:
    beta.cloud.google.com/backend-config: '{"ports": { "443":"backend-config-iap"}}'
  name: grafana
Usually you need to associate it with a port, so I've added that in the example above; just make sure it works with 443 as expected.
This is based on an internal example I'm using:
beta.cloud.google.com/backend-config: '{"ports": { "3000": "be-cfg"}}'
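Applied to the Service from the question, the full manifest would then look roughly like this (same selector and ports as above, with the annotation keyed on the Service port 443):
apiVersion: v1
kind: Service
metadata:
  name: grafana
  annotations:
    beta.cloud.google.com/backend-config: '{"ports": {"443": "backend-config-iap"}}'
spec:
  type: NodePort
  ports:
  - port: 443
    protocol: TCP
    targetPort: 3000
  selector:
    k8s-app: grafana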

Related

ArgoCD: How to not store semi-secret information in manifests?

I'm learning K8s, so bear with me as a noob.
I'm running a single-node K3s cluster at home, and have successfully deployed the traefik/whoami application using the command below, but would like to deploy it via ArgoCD.
cat apps/whoami/whoami.yaml | envsubst | kubectl apply -f -
The manifest I created.
---
apiVersion: v1
kind: Namespace
metadata:
  name: k3s-test
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: whoami-deploy
  namespace: k3s-test
  labels:
    app: whoami
spec:
  replicas: 3
  selector:
    matchLabels:
      app: whoami
  template:
    metadata:
      labels:
        app: whoami
    spec:
      containers:
      - name: whoami
        image: traefik/whoami:v1.8.0
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: whoami-svc
  namespace: k3s-test
  labels:
    service: whoami
spec:
  type: ClusterIP
  ports:
  - name: http
    port: 80
    protocol: TCP
  selector:
    app: whoami
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: whoami-ingress
  namespace: k3s-test
  annotations:
    traefik.ingress.kubernetes.io/router.entrypoints: web
spec:
  rules:
  - host: whoami.${DOMAIN_NAME}
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: whoami-svc
            port:
              number: 80
I want to publish my code to GitHub so that ArgoCD can sync it, but don't want to expose information that is not necessarily secret, but not necessarily public. Currently, my domain name is set as an environment variable (because I don't want to commit mydomain.com to my GitHub repo) and I'm using envsubst when running kubectl apply. Does ArgoCD have similar functionality? I found this GitHub issue showing ArgoCD probably doesn't have variable interpolation, but is there an alternative? Or do I need to store my domain name as a full-on K8s secret?

Right namespace for VirtualService and DestinationRule Istio resources

I am trying to understand the VirtualService and DestinationRule resources in relation to the namespace in which they should be defined, and whether they are really namespaced resources or can also be considered cluster-wide.
I have the following scenario:
The frontend service (web-frontend) accesses the backend service (customers).
The frontend service is deployed in the frontend namespace
The backend service (customers) is deployed in the backend namespace
There are 2 versions of the backend service customers (2 deployments), one related to the version v1 and one related to the version v2.
The default behavior of the ClusterIP service is to load-balance requests between the 2 deployments (v1 and v2), and my goal is to direct traffic only to the v1 deployment by creating a DestinationRule and a VirtualService.
What I want to understand is which namespace is appropriate for defining such DestinationRule and VirtualService resources. Should I create them in the frontend namespace or in the backend namespace?
In the frontend namespace I have the web-frontend deployment and the related service, as follows:
apiVersion: v1
kind: Namespace
metadata:
  name: frontend
  labels:
    istio-injection: enabled
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-frontend
  namespace: frontend
  labels:
    app: web-frontend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web-frontend
  template:
    metadata:
      labels:
        app: web-frontend
        version: v1
    spec:
      containers:
      - image: gcr.io/tetratelabs/web-frontend:1.0.0
        imagePullPolicy: Always
        name: web
        ports:
        - containerPort: 8080
        env:
        - name: CUSTOMER_SERVICE_URL
          value: 'http://customers.backend.svc.cluster.local'
---
kind: Service
apiVersion: v1
metadata:
  name: web-frontend
  namespace: frontend
  labels:
    app: web-frontend
spec:
  selector:
    app: web-frontend
  type: NodePort
  ports:
  - port: 80
    name: http
    targetPort: 8080
I have exposed the web-frontend service by defining the following Gateway and VirtualService resources:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: gateway-all-hosts
  # namespace: default # Also working
  namespace: frontend
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: web-frontend
  # namespace: default # Also working
  namespace: frontend
spec:
  hosts:
  - "*"
  gateways:
  - gateway-all-hosts
  http:
  - route:
    - destination:
        host: web-frontend.frontend.svc.cluster.local
        port:
          number: 80
In the backend namespace I have the customers v1 and v2 deployments and the related service, as follows:
apiVersion: v1
kind: Namespace
metadata:
  name: backend
  labels:
    istio-injection: enabled
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: customers-v1
  namespace: backend
  labels:
    app: customers
    version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: customers
      version: v1
  template:
    metadata:
      labels:
        app: customers
        version: v1
    spec:
      containers:
      - image: gcr.io/tetratelabs/customers:1.0.0
        imagePullPolicy: Always
        name: svc
        ports:
        - containerPort: 3000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: customers-v2
  namespace: backend
  labels:
    app: customers
    version: v2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: customers
      version: v2
  template:
    metadata:
      labels:
        app: customers
        version: v2
    spec:
      containers:
      - image: gcr.io/tetratelabs/customers:2.0.0
        imagePullPolicy: Always
        name: svc
        ports:
        - containerPort: 3000
---
kind: Service
apiVersion: v1
metadata:
  name: customers
  namespace: backend
  labels:
    app: customers
spec:
  selector:
    app: customers
  type: NodePort
  ports:
  - port: 80
    name: http
    targetPort: 3000
I have created the following DestinationRule and VirtualService resources to send the traffic only to the v1 deployment.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: customers
  #namespace: default  # Not working
  #namespace: frontend # working
  namespace: backend   # working
spec:
  host: customers.backend.svc.cluster.local
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: customers
  #namespace: default  # Not working
  #namespace: frontend # working
  namespace: backend   # working
spec:
  hosts:
  - "customers.backend.svc.cluster.local"
  http:
  ## route - subset: v1
  - route:
    - destination:
        host: customers.backend.svc.cluster.local
        port:
          number: 80
        subset: v1
The question is: which is the appropriate namespace in which to define the VirtualService and DestinationRule resources for the customers service?
From my tests I see that I can use either the frontend namespace or the backend namespace. Why can the VirtualService and DestinationRule be created in either the frontend or the backend namespace and work in both cases? Which is the correct one?
Are the DestinationRule and VirtualService resources really namespaced, or can they be considered cluster-wide resources?
Are the low-level routing rules propagated to all Envoy proxies regardless of the namespace?
For a DestinationRule to actually be applied during a request, it needs to be on the destination rule lookup path:
-> client namespace
-> service namespace
-> the configured meshConfig.rootNamespace namespace (istio-system by default)
In your example, the "web-frontend" client is in the frontend Namespace (web-frontend.frontend.svc.cluster.local), the "customers" service is in the backend Namespace (customers.backend.svc.cluster.local), so the customers DestinationRule should be created in one of the following Namespaces: frontend, backend or istio-system. Additionally, please note that the istio-system Namespace isn't recommended unless the destination rule is really a global configuration that is applicable in all Namespaces.
To make sure that the destination rule will be applied we can use the istioctl proxy-config cluster command for the web-frontend Pod:
$ istioctl proxy-config cluster web-frontend-69d6c79786-vkdv8 -n frontend | grep "customers.backend.svc.cluster.local"
SERVICE FQDN                          PORT   SUBSET   DESTINATION RULE
customers.backend.svc.cluster.local   80     -        customers.frontend
customers.backend.svc.cluster.local   80     v1       customers.frontend
customers.backend.svc.cluster.local   80     v2       customers.frontend
When the destination rule is created in the default Namespace, it will not be applied during the request:
$ istioctl proxy-config cluster web-frontend-69d6c79786-vkdv8 -n frontend | grep "customers.backend.svc.cluster.local"
SERVICE FQDN                          PORT   SUBSET   DESTINATION RULE
customers.backend.svc.cluster.local   80     -
For more information, see the Control configuration sharing in namespaces documentation.
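As an aside, that documentation also covers limiting a rule's visibility explicitly: DestinationRule and VirtualService accept an exportTo field. A sketch of the idea (note that restricting the rule to its own namespace like this would hide it from the web-frontend sidecar in the frontend namespace):
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: customers
  namespace: backend
spec:
  host: customers.backend.svc.cluster.local
  # "." = only workloads in the same namespace (backend) see this rule
  exportTo:
  - "."
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2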

Kubernetes API to create a CRD using Minikube, with deployment pod in pending state

I have a problem with the Kubernetes API and a CRD. While creating a deployment with a single nginx pod, I would like to access it on port 80 from a remote server, and locally as well. The pod sits in a Pending state; after running kubectl get pods and waiting around 40 seconds on average, the pod disappears and a different nginx pod name starts up. This seems to be in a loop.
The error is
* W1214 23:27:19.542477 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
I was following this article about service accounts and roles,
https://thorsten-hans.com/custom-resource-definitions-with-rbac-for-serviceaccounts#create-the-clusterrolebinding
I am not even sure I have created this correctly.
Do I even need to create the ServiceAccount_v1.yaml, PolicyRule_v1.yaml and ClusterRoleBinding.yaml files to resolve my error above?
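(For reference, the fix template from the error message, filled in with the ServiceAccount defined below, would be something along these lines; the binding name is arbitrary and I haven't verified that this is what resolves the Pending pod:)
kubectl create rolebinding extension-apiserver-auth-reader \
  --namespace=kube-system \
  --role=extension-apiserver-authentication-reader \
  --serviceaccount=example:user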
All of my .yaml files for this are below,
CustomResourceDefinition_v1.yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  # name must match the spec fields below, and be in the form: <plural>.<group>
  name: webservers.stable.example.com
spec:
  # group name to use for REST API: /apis/<group>/<version>
  group: stable.example.com
  names:
    # kind is normally the CamelCased singular type. Your resource manifests use this.
    kind: WebServer
    # plural name to be used in the URL: /apis/<group>/<version>/<plural>
    plural: webservers
    # shortNames allow shorter string to match your resource on the CLI
    shortNames:
    - ws
    # singular name to be used as an alias on the CLI and for display
    singular: webserver
  # either Namespaced or Cluster
  scope: Cluster
  # list of versions supported by this CustomResourceDefinition
  versions:
  - name: v1
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              cronSpec:
                type: string
              image:
                type: string
              replicas:
                type: integer
    # Each version can be enabled/disabled by Served flag.
    served: true
    # One and only one version must be marked as the storage version.
    storage: true
Deployments_v1_apps.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  # Unique key of the Deployment instance
  name: nginx-deployment
spec:
  # 1 Pods should exist at all times.
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  strategy:
    rollingUpdate:
      maxSurge: 100
      maxUnavailable: 0
    type: RollingUpdate
  template:
    metadata:
      labels:
        # Apply this label to pods and default
        # the Deployment label selector to this value
        app: nginx
    spec:
      containers:
      # Run this image
      - image: nginx:1.14
        name: nginx
        ports:
        - containerPort: 80
      hostname: nginx
      nodeName: webserver01
      securityContext:
        runAsNonRoot: True
#status:
#  availableReplicas: 1
Ingress_v1_networking.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Exact
        backend:
          resource:
            kind: nginx-service
            name: nginx-deployment
          #service:
          #  name: nginx
          #  port: 80
          #serviceName: nginx
          #servicePort: 80
Service_v1_core.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
ServiceAccount_v1.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: user
  namespace: example
PolicyRule_v1.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: "example.com:webservers:reader"
rules:
- apiGroups: ["example.com"]
  resources: ["ResourceAll"]
  verbs: ["VerbAll"]
ClusterRoleBinding_v1.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: "example.com:webservers:cdreader-read"
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: "example.com:webservers:reader"
subjects:
- kind: ServiceAccount
  name: user
  namespace: example

Ambassador Edge Stack Host not generating ACME certificate

I installed Ambassador Edge Stack 1.4.2 Community Edition and added the Host resource as follows.
---
apiVersion: getambassador.io/v2
kind: Host
metadata:
  name: ambassador-host
spec:
  hostname: quote.svc.ambassador.dev.platformer.com
  acmeProvider:
    email: nilesh93.j@gmail.com
This gets stuck in the following stage
NAME              HOSTNAME                                   STATE   PHASE COMPLETED      PHASE PENDING              AGE
ambassador-host   quote.svc.ambassador.dev.platformer.com   Error   ACMEUserRegistered   ACMECertificateChallenge   48s
This is my mappings file, which I created to run the example quote service:
apiVersion: getambassador.io/v2
kind: Mapping
metadata:
  name: quote
  namespace: ambassador
spec:
  prefix: /quote
  service: http://quote.ambassador.svc
  host: quote.svc.ambassador.dev.platformer.com
---
apiVersion: getambassador.io/v2
kind: Mapping
metadata:
  name: acme-challenge-mapping
  namespace: ambassador
spec:
  rewrite: ""
  prefix: /.well-known/acme-challenge
  service: http://quote.ambassador.svc
  host: quote.svc.ambassador.dev.platformer.com
Any idea on how to fix this?
I had the same issue with Ambassador. I installed cert-manager on Kubernetes with this command:
kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v0.15.1/cert-manager.yaml
Check https://www.getambassador.io/docs/latest/howtos/cert-manager/ for more information.
You need to create a ClusterIssuer and a Certificate for the Host.
Note: "Cert-manager will automatically create and renew TLS certificates and store them in Kubernetes secrets for easy use in a cluster. Ambassador will automatically watch for secret changes and reload certificates upon renewal."
---
apiVersion: getambassador.io/v2
kind: Mapping
metadata:
  name: acme-challenge-mapping
spec:
  prefix: /.well-known/acme-challenge/
  rewrite: ""
  service: acme-challenge-service
---
apiVersion: v1
kind: Service
metadata:
  name: acme-challenge-service
spec:
  ports:
  - port: 80
    targetPort: 8089
  selector:
    acme.cert-manager.io/http01-solver: "true"
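For reference, a minimal ClusterIssuer and Certificate along the lines of that cert-manager how-to could look like the sketch below. The issuer name, email, secret name and solver ingress class are placeholders (only the hostname is taken from the question), and the apiVersion matches the cert-manager 0.15 release installed above; newer releases use cert-manager.io/v1.
apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    # Let's Encrypt production endpoint
    server: https://acme-v02.api.letsencrypt.org/directory
    email: you@example.com           # placeholder
    privateKeySecretRef:
      name: letsencrypt-account-key
    solvers:
    - http01:
        ingress:
          class: ambassador          # placeholder; depends on your setup
---
apiVersion: cert-manager.io/v1alpha2
kind: Certificate
metadata:
  name: ambassador-certs
  namespace: ambassador
spec:
  secretName: ambassador-certs
  issuerRef:
    name: letsencrypt-prod
    kind: ClusterIssuer
  dnsNames:
  - quote.svc.ambassador.dev.platformer.com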

Kubernetes basic authentication with Traefik

I am trying to configure basic authentication on an Nginx example with Traefik as the Ingress controller.
I just created the secret "mypasswd" in Kubernetes secrets.
This is the Ingress I am using:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginxingress
  annotations:
    ingress.kubernetes.io/auth-type: basic
    ingress.kubernetes.io/auth-realm: traefik
    ingress.kubernetes.io/auth-secret: mypasswd
spec:
  rules:
  - host: nginx.mycompany.com
    http:
      paths:
      - path: /
        backend:
          serviceName: nginxservice
          servicePort: 80
I checked the Traefik dashboard and it appears there; if I access nginx.mycompany.com I can see the Nginx web page, but without basic authentication.
This is my nginx deployment:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
Nginx service:
apiVersion: v1
kind: Service
metadata:
  labels:
    name: nginxservice
  name: nginxservice
spec:
  ports:
  # The port that this service should serve on.
  - port: 80
  # Label keys and values that must match in order to receive traffic for this service.
  selector:
    app: nginx
  type: ClusterIP
It is popular to use basic authentication. Per the Kubernetes documentation, you should be able to protect access using the following steps:
Create an authentication file using the htpasswd tool. You'll be asked for a password for the user:
htpasswd -c ./auth <username>
Now use kubectl to create a secret in the monitoring namespace using the file created by htpasswd.
kubectl create secret generic mysecret --from-file auth --namespace=monitoring
Enable basic authentication by attaching annotations to the Ingress object:
ingress.kubernetes.io/auth-type: "basic"
ingress.kubernetes.io/auth-secret: "mysecret"
So, a full example config for basic authentication can look like this:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: prometheus-dashboard
  namespace: monitoring
  annotations:
    kubernetes.io/ingress.class: traefik
    ingress.kubernetes.io/auth-type: "basic"
    ingress.kubernetes.io/auth-secret: "mysecret"
spec:
  rules:
  - host: dashboard.prometheus.example.com
    http:
      paths:
      - backend:
          serviceName: prometheus
          servicePort: 9090
You can apply the example as follows:
kubectl create -f prometheus-ingress.yaml -n monitoring
This should work without any issues.
Basic auth configuration for Kubernetes and Traefik 2 seems to have changed slightly. It took me some time to find the solution, which is why I want to share it. I use k3s, by the way.
Steps 1 + 2 are identical to what @d0bry wrote; create the secret:
printf "my-username:`openssl passwd -apr1`\n" >> my-auth
kubectl create secret generic my-auth --from-file my-auth --namespace my-namespace
Step 3 is to create the Ingress object and apply a middleware that will handle the authentication:
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: my-auth-middleware
  namespace: my-namespace
spec:
  basicAuth:
    removeHeader: true
    secret: my-auth
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  namespace: my-namespace
  annotations:
    kubernetes.io/ingress.class: traefik
    traefik.ingress.kubernetes.io/router.middlewares: my-namespace-my-auth-middleware@kubernetescrd
spec:
  rules:
  - host: my.domain.net
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service
            port:
              number: 8080
And then, of course, apply the configuration:
kubectl apply -f my-ingress.yaml
refs:
https://doc.traefik.io/traefik/routing/providers/kubernetes-ingress/
https://doc.traefik.io/traefik/middlewares/http/basicauth/
With the latest Traefik (verified with 2.7) it got even simpler: just create a secret of type kubernetes.io/basic-auth and use that in your middleware. There is no need to create the username:password string first and create a secret from that.
apiVersion: v1
kind: Secret
metadata:
  name: my-auth
  namespace: my-namespace
type: kubernetes.io/basic-auth
data:
  username: <username in base64>
  password: <password in base64>
---
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: my-auth-middleware
  namespace: my-namespace
spec:
  basicAuth:
    removeHeader: true
    secret: my-auth
Note that the password is not hashed as it is with htpasswd, but only base64 encoded.
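For example, the data values can be generated like this ("admin" and "changeme" are just placeholders):
echo -n 'admin' | base64      # username
echo -n 'changeme' | base64   # password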
Ref docs