Enable Istio mTLS STRICT with MongoDB

I have a technical difficulty: I am trying to enable 'STRICT' mutual TLS.
I have a stateless service (name: "my-service" / ServiceAccount / Service / Deployment) and a stateful database (name: "database" / ServiceAccount / Service with clusterIP: None & port: 27017 / StatefulSet).
Without PeerAuthentication, everything works well. But when I enable STRICT PeerAuthentication in 'istio-system', the service doesn't start correctly (1/2 READY).
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system
spec:
  mtls:
    mode: STRICT
I tried to add a "DestinationRule" :
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: database
  namespace: my-namespace
spec:
  host: database
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL
I tried to add an "AuthorizationPolicy":
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: database
  namespace: striper
spec:
  selector:
    matchLabels:
      app: database
  rules:
  - from:
    - source:
        principals: ["*"]
Without success...
To connect to the database, I use "database" as the host and "27017" as the port, and both the service and the database are in the same namespace, 'my-namespace'.
Any help is welcome ^_^
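For anyone hitting the same wall, a few generic checks tend to be useful here. This is only a debugging sketch; the pod names are placeholders, and the namespace is the 'my-namespace' from the question:
# Confirm both pods actually run the istio-proxy sidecar (expect 2/2 READY once healthy).
kubectl get pods -n my-namespace
# See whether the client sidecar has a cluster (and a DestinationRule) configured for the database host.
istioctl proxy-config cluster <my-service-pod> -n my-namespace | grep database
# Look for TLS or connection errors in the failing pod's sidecar logs.
kubectl logs <my-service-pod> -n my-namespace -c istio-proxy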

Related

Right namespace for VirtualService and DestinationRule Istio resources

I am trying to understand the VirtualService and DestinationRule resources in relation to the namespace in which they should be defined, and whether they are really namespaced resources or can also be considered cluster-wide resources.
I have the following scenario:
The frontend service (web-frontend) accesses the backend service (customers).
The frontend service is deployed in the frontend namespace.
The backend service (customers) is deployed in the backend namespace.
There are 2 versions of the backend service customers (2 deployments), one for version v1 and one for version v2.
The default behavior of the ClusterIP service is to load-balance requests between the 2 deployments (v1 and v2); my goal is to direct the traffic only to the v1 deployment by creating a DestinationRule and a VirtualService.
What I want to understand is which is the appropriate namespace in which to define such DestinationRule and VirtualService resources. Should I create them in the frontend namespace or in the backend namespace?
In the frontend namespace I have the web-frontend deployment and the related service as follows:
apiVersion: v1
kind: Namespace
metadata:
  name: frontend
  labels:
    istio-injection: enabled
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-frontend
  namespace: frontend
  labels:
    app: web-frontend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web-frontend
  template:
    metadata:
      labels:
        app: web-frontend
        version: v1
    spec:
      containers:
      - image: gcr.io/tetratelabs/web-frontend:1.0.0
        imagePullPolicy: Always
        name: web
        ports:
        - containerPort: 8080
        env:
        - name: CUSTOMER_SERVICE_URL
          value: 'http://customers.backend.svc.cluster.local'
---
kind: Service
apiVersion: v1
metadata:
  name: web-frontend
  namespace: frontend
  labels:
    app: web-frontend
spec:
  selector:
    app: web-frontend
  type: NodePort
  ports:
  - port: 80
    name: http
    targetPort: 8080
I have exposed the web-frontend service by defining the following Gateway and VirtualService resources as follows:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: gateway-all-hosts
  # namespace: default # Also working
  namespace: frontend
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: web-frontend
  # namespace: default # Also working
  namespace: frontend
spec:
  hosts:
  - "*"
  gateways:
  - gateway-all-hosts
  http:
  - route:
    - destination:
        host: web-frontend.frontend.svc.cluster.local
        port:
          number: 80
In the backend namespace I have the customers v1 and v2 deployments and the related service as follows:
apiVersion: v1
kind: Namespace
metadata:
  name: backend
  labels:
    istio-injection: enabled
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: customers-v1
  namespace: backend
  labels:
    app: customers
    version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: customers
      version: v1
  template:
    metadata:
      labels:
        app: customers
        version: v1
    spec:
      containers:
      - image: gcr.io/tetratelabs/customers:1.0.0
        imagePullPolicy: Always
        name: svc
        ports:
        - containerPort: 3000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: customers-v2
  namespace: backend
  labels:
    app: customers
    version: v2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: customers
      version: v2
  template:
    metadata:
      labels:
        app: customers
        version: v2
    spec:
      containers:
      - image: gcr.io/tetratelabs/customers:2.0.0
        imagePullPolicy: Always
        name: svc
        ports:
        - containerPort: 3000
---
kind: Service
apiVersion: v1
metadata:
  name: customers
  namespace: backend
  labels:
    app: customers
spec:
  selector:
    app: customers
  type: NodePort
  ports:
  - port: 80
    name: http
    targetPort: 3000
I have created the following DestinationRule and VirtualService resources to send the traffic only to the v1 deployment.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: customers
  #namespace: default # Not working
  #namespace: frontend # working
  namespace: backend # working
spec:
  host: customers.backend.svc.cluster.local
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: customers
  #namespace: default # Not working
  #namespace: frontend # working
  namespace: backend # working
spec:
  hosts:
  - "customers.backend.svc.cluster.local"
  http:
  ## route - subset: v1
  - route:
    - destination:
        host: customers.backend.svc.cluster.local
        port:
          number: 80
        subset: v1
The question is: which is the appropriate namespace in which to define the VirtualService and DestinationRule resources for the customers service?
From my tests I see that I can use either the frontend namespace or the backend namespace. Why can the VirtualService and DestinationRule be created in either the frontend or the backend namespace and work in both cases? Which is the correct one?
Are the DestinationRule and VirtualService resources really namespaced resources, or can they be considered cluster-wide resources?
Are the low-level routing rules of the proxies propagated to all Envoy proxies regardless of the namespace?
For a DestinationRule to actually be applied during a request, it needs to be on the destination rule lookup path:
-> client namespace
-> service namespace
-> the configured meshconfig.rootNamespace namespace (istio-system by default)
In your example, the "web-frontend" client is in the frontend Namespace (web-frontend.frontend.svc.cluster.local), the "customers" service is in the backend Namespace (customers.backend.svc.cluster.local), so the customers DestinationRule should be created in one of the following Namespaces: frontend, backend or istio-system. Additionally, please note that the istio-system Namespace isn't recommended unless the destination rule is really a global configuration that is applicable in all Namespaces.
To make sure that the destination rule will be applied we can use the istioctl proxy-config cluster command for the web-frontend Pod:
$ istioctl proxy-config cluster web-frontend-69d6c79786-vkdv8 -n frontend | grep "customers.backend.svc.cluster.local"
SERVICE FQDN                             PORT    SUBSET    DESTINATION RULE
customers.backend.svc.cluster.local      80      -         customers.frontend
customers.backend.svc.cluster.local      80      v1        customers.frontend
customers.backend.svc.cluster.local      80      v2        customers.frontend
When the destination rule is created in the default Namespace, it will not be applied during the request:
$ istioctl proxy-config cluster web-frontend-69d6c79786-vkdv8 -n frontend | grep "customers.backend.svc.cluster.local"
SERVICE FQDN                             PORT    SUBSET    DESTINATION RULE
customers.backend.svc.cluster.local      80      -
For more information, see the Control configuration sharing in namespaces documentation.
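As a side note, that documentation revolves around the exportTo field, which controls which namespaces a networking resource is visible to. A minimal, illustrative sketch (the exportTo value is an example, not taken from the question):
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: customers
  namespace: backend
spec:
  host: customers.backend.svc.cluster.local
  # "." limits visibility to the rule's own namespace; the default "*" exports it to all namespaces.
  exportTo:
  - "."
  subsets:
  - name: v1
    labels:
      version: v1
With exportTo set to ".", only clients in the backend namespace would see this rule, so for the web-frontend client in the frontend namespace you would keep the default "*" (depending on the Istio version, explicit namespace names may also be accepted here).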

Enable IAP on Ingress

I've followed the documentation about how to enable IAP on GKE.
I've:
configured the consent screen
created OAuth credentials
added the universal redirect URL
added myself as an IAP-secured Web App User
And written my deployment like this:
apiVersion: v1
data:
  client_id: <my_id>
  client_secret: <my_secret>
kind: Secret
metadata:
  name: backend-iap-secret
type: Opaque
---
apiVersion: v1
kind: Service
metadata:
  name: grafana
spec:
  ports:
  - port: 443
    protocol: TCP
    targetPort: 3000
  selector:
    k8s-app: grafana
  type: NodePort
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: grafana
spec:
  replicas: 1
  template:
    metadata:
      labels:
        k8s-app: grafana
    spec:
      containers:
      - env:
        - name: GF_SERVER_HTTP_PORT
          value: "3000"
        image: docker.io/grafana/grafana:6.7.1
        name: grafana
        ports:
        - containerPort: 3000
          protocol: TCP
        readinessProbe:
          httpGet:
            path: /api/health
            port: 3000
---
apiVersion: cloud.google.com/v1beta1
kind: BackendConfig
metadata:
  name: backend-config-iap
spec:
  iap:
    enabled: true
    oauthclientCredentials:
      secretName: backend-iap-secret
---
apiVersion: networking.gke.io/v1beta1
kind: ManagedCertificate
metadata:
  name: monitoring-tls
spec:
  domains:
  - monitoring.foo.com
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    beta.cloud.google.com/backend-config: '{"default": "backend-config-iap"}'
    kubernetes.io/ingress.global-static-ip-name: monitoring
    networking.gke.io/managed-certificates: monitoring-tls
  name: grafana
spec:
  backend:
    serviceName: grafana
    servicePort: 443
When I look at my ingress I've this:
$ k describe ingress
Name: grafana
[...]
Annotations: beta.cloud.google.com/backend-config: {"default": "backend-config-iap"}
ingress.kubernetes.io/backends: {"k8s-blabla":"HEALTHY"}
[...]
Events: <none>
$
I can connect to the web page without any problem and Grafana is up and running, but I can also connect without being authenticated (which is a problem).
So everything looks fine, but IAP is not activated. Why?
Worse, if I enable it manually it works, but if I redo kubectl apply -f monitoring.yaml, IAP is disabled.
What am I missing?
Because my secret values are stored in Secret Manager (and retrieved at build time), I suspected my secrets of having some glitches (spaces, \n, etc.) in them, so I've added a script to test it:
gcloud compute backend-services update \
--project=<my_project_id> \
--global \
$(kubectl get ingress grafana -o json | jq -r '.metadata.annotations."ingress.kubernetes.io/backends"' | jq -r 'keys[0]') \
--iap=enabled,oauth2-client-id=$(gcloud --project="<my_project_id>" beta secrets versions access latest --secret=Monitoring_client_id),oauth2-client-secret=$(gcloud --project="<my_project_id>" beta secrets versions access latest --secret=Monitoring_secret)
And now IAP is properly enabled with the correct OAuth client, so my secrets are "clean".
By the way, I also tried to rename the secret variables like this (from client_id):
* oauth_client_id
* oauth-client-id
* clientID (like in the BackendConfig documentation)
I've also written the values in the BackendConfig like this:
kind: BackendConfig
metadata:
  name: backend-config-iap
spec:
  iap:
    enabled: true
    oauthclientCredentials:
      secretName: backend-iap-secret
      clientID: <value>
      clientSecret: <value>
But that doesn't work either.
Erratum:
The fact that IAP is destroyed when I deploy again (after enabling it in the web UI) comes from my deployment script in this test (I did a kubectl delete first).
Nevertheless, I can't enable IAP with my backend configuration alone.
As suggested I've filed a bug report: https://issuetracker.google.com/issues/153475658
Solution given by Totem:
Change the given YAML to this:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.global-static-ip-name: monitoring
    networking.gke.io/managed-certificates: monitoring-tls
  name: grafana
[...]
---
apiVersion: v1
kind: Service
metadata:
  name: grafana
  annotations:
    beta.cloud.google.com/backend-config: '{"default": "backend-config-iap"}'
[...]
The backend config is associated with the Service and not with the Ingress...
Now it works!
You did everything right, with just one small change:
the annotation should be added on the Service resource.
apiVersion: v1
kind: Service
metadata:
  annotations:
    beta.cloud.google.com/backend-config: '{"ports": { "443":"backend-config-iap"}}'
  name: grafana
Usually you need to associate it with a port, so I've added that in the example above, but make sure it works with 443 as expected.
This is based on an internal example I'm using:
beta.cloud.google.com/backend-config: '{"ports": { "3000":"be-cfg"}}'

How to debug QuotaSpecBinding for rate-limits in istio?

I am trying to enable a rate limit for my Istio-enabled service, but it doesn't work. How do I debug whether my configuration is correct?
apiVersion: config.istio.io/v1alpha2
kind: memquota
metadata:
  name: handler
  namespace: istio-system
spec:
  quotas:
  - name: requestcount.quota.istio-system
    maxAmount: 5
    validDuration: 1s
    overrides:
    - dimensions:
        engine: myEngineValue
      maxAmount: 5
      validDuration: 1s
---
apiVersion: config.istio.io/v1alpha2
kind: quota
metadata:
  name: requestcount
  namespace: istio-system
spec:
  dimensions:
    source: request.headers["x-forwarded-for"] | "unknown"
    destination: destination.labels["app"] | destination.service | "unknown"
    destinationVersion: destination.labels["version"] | "unknown"
    engine: destination.labels["engine"] | "unknown"
---
apiVersion: config.istio.io/v1alpha2
kind: QuotaSpec
metadata:
  name: request-count
  namespace: istio-system
spec:
  rules:
  - quotas:
    - charge: 1
      quota: requestcount
---
apiVersion: config.istio.io/v1alpha2
kind: QuotaSpecBinding
metadata:
  name: request-count
  namespace: istio-system
spec:
  quotaSpecs:
  - name: request-count
    namespace: istio-system
  services:
  # - service: '*' ; I tried with this as well
  - name: my-service
    namespace: default
---
apiVersion: config.istio.io/v1alpha2
kind: rule
metadata:
  name: quota
  namespace: istio-system
spec:
  actions:
  - handler: handler.memquota
    instances:
    - requestcount.quota
I tried with - service: '*' as well in the QuotaSpecBinding, but no luck.
How do I confirm whether my configuration is correct? The my-service is the Kubernetes service for my deployment. (Does it have to be an Istio VirtualService for rate limits to work? Edit: yes, it has to!)
I followed this doc, except for the VirtualService part.
I have a feeling I am making a mistake somewhere in the namespaces.
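A quick sanity check at this point is to list the created objects and confirm they exist in the expected namespaces; the resource names below assume the default config.istio.io CRDs used above:
kubectl get memquotas.config.istio.io,quotas.config.istio.io,quotaspecs.config.istio.io,quotaspecbindings.config.istio.io,rules.config.istio.io --all-namespaces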
You have to define the virtual service for the service my-service:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: myservice
spec:
  hosts:
  - myservice
  http:
  - route:
    - destination:
        host: myservice
This way, you let Istio know which service/host you are referring to.
In terms of debugging, there is a project named Kiali that aims to improve observability in Istio environments. It has validations for some Istio and Kubernetes objects: see its Istio configuration browse feature.
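If Kiali is installed with its defaults in istio-system, its UI (including those config validations) can usually be reached with a simple port-forward; the service name and port below are the usual defaults, so adjust if your install differs:
kubectl port-forward -n istio-system svc/kiali 20001:20001
# then open http://localhost:20001/kiali in a browser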

How to get Kubernetes Ingress Port 80 working on baremetal single node cluster

I have a bare-metal Kubernetes (v1.11.0) cluster created with kubeadm, working fine without any issues. Networking is with Calico, and I made it a single-node cluster using the kubectl taint nodes command (single node is a requirement).
I need to run the mydockerhub/sampleweb static website image on host port 80. Assume the IP address of the Ubuntu server running this Kubernetes is 192.168.8.10.
How do I make my static website available on 192.168.8.10:80, or on a hostname mapped to it on a local DNS server (example: frontend.sampleweb.local:80)? Later I need to run other services on different ports mapped to other subdomains (example: backend.sampleweb.local:80, which routes to a service running on port 8080).
I need to know:
Can I achieve this without a load balancer?
What resources do I need to create? (Ingress, Deployment, etc.)
What additional configuration is needed on the cluster? (network policy, etc.)
Sample YAML files would be much appreciated.
I'm new to the Kubernetes world. I got sample Kubernetes deployments (like sock-shop) working end-to-end without any issues. I tried NodePort to access the service, but instead of running it on a different port I need to run it on exactly port 80 on the host. I tried many ingress solutions, but they didn't work.
I recently used traefik.io to configure a project with requirements similar to yours, so I'll show a basic solution with Traefik and ingresses.
I dedicated a whole namespace called traefik (you can use kube-system) and created a Kubernetes ServiceAccount:
apiVersion: v1
kind: Namespace
metadata:
  name: traefik
---
apiVersion: v1
kind: ServiceAccount
metadata:
  namespace: traefik
  name: traefik-ingress-controller
The Traefik controller, which is invoked by Ingress rules, requires a ClusterRole and its binding:
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: traefik-ingress-controller
rules:
- apiGroups:
  - ""
  resources:
  - services
  - endpoints
  - secrets
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - extensions
  resources:
  - ingresses
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: traefik-ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: traefik-ingress-controller
subjects:
- kind: ServiceAccount
  namespace: traefik
  name: traefik-ingress-controller
The Traefik controller will be deployed as a DaemonSet (i.e. by definition one per node in your cluster), and a Kubernetes Service is dedicated to the controller:
kind: DaemonSet
apiVersion: extensions/v1beta1
metadata:
  name: traefik-ingress-controller
  namespace: traefik
  labels:
    k8s-app: traefik-ingress-lb
spec:
  template:
    metadata:
      labels:
        k8s-app: traefik-ingress-lb
        name: traefik-ingress-lb
    spec:
      serviceAccountName: traefik-ingress-controller
      terminationGracePeriodSeconds: 60
      containers:
      - name: traefik-ingress-lb
        image: traefik
        ports:
        - name: http
          containerPort: 80
          hostPort: 80
        - name: admin
          containerPort: 8080
        securityContext:
          capabilities:
            drop:
            - ALL
            add:
            - NET_BIND_SERVICE
        args:
        - --api
        - --kubernetes
        - --logLevel=INFO
---
kind: Service
apiVersion: v1
metadata:
  namespace: traefik
  name: traefik-ingress-service
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
  - protocol: TCP
    port: 80
    name: web
  - protocol: TCP
    port: 8080
    name: admin
The final part requires you to create a Service for each microservice in your project; here is an example:
apiVersion: v1
kind: Service
metadata:
  namespace: traefik
  name: my-svc-1
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
  - port: 80
    targetPort: 8080
and also the ingress (set of rules) that will forward the request to the proper service:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  namespace: traefik
  name: ingress-ms-1
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: my-address-url
    http:
      paths:
      - backend:
          serviceName: my-svc-1
          servicePort: 80
In this Ingress I wrote a host URL; this will be the entry point into your cluster, so you need to resolve that name to your master K8S node. If you have more nodes that could act as master, then a load balancer is suggested (in that case the host URL resolves to the LB).
Take a look at the kubernetes.io documentation to get the Kubernetes concepts clear. traefik.io is also useful.
I hope this helps you.
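As a concrete example of the name resolution mentioned above: for a single-node lab without a real DNS server, a static hosts entry on the client machine is enough. The IP and hostnames below are the ones from the question:
# /etc/hosts on the machine you browse from
192.168.8.10   frontend.sampleweb.local backend.sampleweb.local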
In addition to the answer of Nicola Ben, you have to define externalIPs in your Traefik service: just follow the steps of Nicola Ben and add an externalIPs section to the service "my-svc-1".
apiVersion: v1
kind: Service
metadata:
  namespace: traefik
  name: my-svc-1
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
  - port: 80
    targetPort: 8080
  externalIPs:
  - <IP_OF_A_NODE>
And you can define more than one externalIP.
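The value for <IP_OF_A_NODE> can be read straight from the cluster, for example:
kubectl get nodes -o wide   # the INTERNAL-IP / EXTERNAL-IP columns show the node addresses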

Kubernetes basic authentication with Traefik

I am trying to configure Basic Authentication on an Nginx example with Traefik as the Ingress controller.
I just created the secret "mypasswd" in the Kubernetes secrets.
This is the Ingress I am using:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginxingress
  annotations:
    ingress.kubernetes.io/auth-type: basic
    ingress.kubernetes.io/auth-realm: traefik
    ingress.kubernetes.io/auth-secret: mypasswd
spec:
  rules:
  - host: nginx.mycompany.com
    http:
      paths:
      - path: /
        backend:
          serviceName: nginxservice
          servicePort: 80
I checked the Traefik dashboard and it appears there; if I access nginx.mycompany.com I can see the Nginx webpage, but without the basic authentication.
This is my nginx deployment:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
Nginx service:
apiVersion: v1
kind: Service
metadata:
  labels:
    name: nginxservice
  name: nginxservice
spec:
  ports:
  # The port that this service should serve on.
  - port: 80
  # Label keys and values that must match in order to receive traffic for this service.
  selector:
    app: nginx
  type: ClusterIP
It is popular to use basic authentication. Following the Kubernetes documentation, you should be able to protect access through Traefik using the following steps:
Create an authentication file using the htpasswd tool. You'll be asked for a password for the user (the username below is a placeholder):
htpasswd -c ./auth <username>
Now use kubectl to create a secret in the monitoring namespace using the file created by htpasswd:
kubectl create secret generic mysecret --from-file auth --namespace=monitoring
Enable basic authentication by attaching annotations to the Ingress object:
ingress.kubernetes.io/auth-type: "basic"
ingress.kubernetes.io/auth-secret: "mysecret"
So a full example config of basic authentication can look like this:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: prometheus-dashboard
  namespace: monitoring
  annotations:
    kubernetes.io/ingress.class: traefik
    ingress.kubernetes.io/auth-type: "basic"
    ingress.kubernetes.io/auth-secret: "mysecret"
spec:
  rules:
  - host: dashboard.prometheus.example.com
    http:
      paths:
      - backend:
          serviceName: prometheus
          servicePort: 9090
You can apply the example as follows:
kubectl create -f prometheus-ingress.yaml -n monitoring
This should work without any issues.
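A quick way to verify that the challenge is actually enforced is to hit the host with and without credentials; the hostname is the one from the example above, and the user is whatever you passed to htpasswd:
curl -I http://dashboard.prometheus.example.com                              # expect HTTP 401 without credentials
curl -I -u <username>:<password> http://dashboard.prometheus.example.com     # expect the app's normal response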
Basic Auth configuration for Kubernetes and Traefik 2 seems to have changed slightly. It took me some time to find the solution, which is why I want to share it. I use k3s, by the way.
Steps 1 and 2 are identical to what d0bry wrote; create the secret:
printf "my-username:`openssl passwd -apr1`\n" >> my-auth
kubectl create secret generic my-auth --from-file my-auth --namespace my-namespace
Step 3 is to create the Ingress object and apply a Middleware that will handle the authentication:
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: my-auth-middleware
  namespace: my-namespace
spec:
  basicAuth:
    removeHeader: true
    secret: my-auth
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  namespace: my-namespace
  annotations:
    kubernetes.io/ingress.class: traefik
    traefik.ingress.kubernetes.io/router.middlewares: my-namespace-my-auth-middleware@kubernetescrd
spec:
  rules:
  - host: my.domain.net
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service
            port:
              number: 8080
And then, of course, apply the configuration:
kubectl apply -f my-ingress.yaml
refs:
https://doc.traefik.io/traefik/routing/providers/kubernetes-ingress/
https://doc.traefik.io/traefik/middlewares/http/basicauth/
With the latest Traefik (verified with 2.7) it got even simpler: just create a secret of type kubernetes.io/basic-auth and use that in your Middleware. There is no need to create the username:password string first and create a secret from that.
apiVersion: v1
kind: Secret
metadata:
  name: my-auth
  namespace: my-namespace
type: kubernetes.io/basic-auth
data:
  username: <username in base64>
  password: <password in base64>
---
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: my-auth-middleware
  namespace: my-namespace
spec:
  basicAuth:
    removeHeader: true
    secret: my-auth
Note that the password is not hashed as it is with htpasswd, but only base64 encoded.
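If you prefer not to base64-encode the values by hand, the same secret can also be created with kubectl and literal values (standard kubectl flags; the username and password are placeholders):
kubectl create secret generic my-auth \
  --namespace my-namespace \
  --type=kubernetes.io/basic-auth \
  --from-literal=username=<username> \
  --from-literal=password=<password>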
Ref docs