Ambassador Edge Stack Host not generating ACME certificate - kubernetes

I installed Ambassador Edge Stack 1.4.2 Community Edition and added the Host resource as follows.
---
apiVersion: getambassador.io/v2
kind: Host
metadata:
  name: ambassador-host
spec:
  hostname: quote.svc.ambassador.dev.platformer.com
  acmeProvider:
    email: nilesh93.j@gmail.com
This gets stuck in the following state:
NAME              HOSTNAME                                   STATE   PHASE COMPLETED      PHASE PENDING              AGE
ambassador-host   quote.svc.ambassador.dev.platformer.com   Error   ACMEUserRegistered   ACMECertificateChallenge   48s
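One way to see the underlying ACME error is to inspect the Host resource itself (adjust the namespace to wherever the Host was applied; ambassador is assumed here):
kubectl describe host ambassador-host -n ambassador
kubectl get host ambassador-host -n ambassador -o yaml   # the status section carries the ACME error detail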
This is my mappings file, which I used to run the example quote service.
apiVersion: getambassador.io/v2
kind: Mapping
metadata:
  name: quote
  namespace: ambassador
spec:
  prefix: /quote
  service: http://quote.ambassador.svc
  host: quote.svc.ambassador.dev.platformer.com
---
apiVersion: getambassador.io/v2
kind: Mapping
metadata:
  name: acme-challenge-mapping
  namespace: ambassador
spec:
  rewrite: ""
  prefix: /.well-known/acme-challenge
  service: http://quote.ambassador.svc
  host: quote.svc.ambassador.dev.platformer.com
Any idea on how to fix this?

I had the same issue with Ambassador. I installed cert-manager on Kubernetes with this command:
kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v0.15.1/cert-manager.yaml
Check https://www.getambassador.io/docs/latest/howtos/cert-manager/ for more information.
You need to create a ClusterIssuer and a Certificate for the Host.
Note: "cert-manager will automatically create and renew TLS certificates and store them in Kubernetes secrets for easy use in a cluster. Ambassador will automatically watch for secret changes and reload certificates upon renewal."
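A minimal sketch of those two resources, assuming Let's Encrypt with an HTTP-01 solver and the hostname from the question (the issuer name, secret names and email are placeholders; the apiVersion matches cert-manager v0.15):
apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    email: you@example.com                       # placeholder contact email
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-prod-account-key         # stores the ACME account key
    solvers:
    - http01:
        ingress:
          class: nginx                           # solver ingress class; with Ambassador the Mapping below routes the challenge traffic
---
apiVersion: cert-manager.io/v1alpha2
kind: Certificate
metadata:
  name: quote-cert
  namespace: ambassador
spec:
  secretName: quote-cert-tls                     # TLS secret the Host/TLSContext can reference
  issuerRef:
    name: letsencrypt-prod
    kind: ClusterIssuer
  dnsNames:
  - quote.svc.ambassador.dev.platformer.com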
---
apiVersion: getambassador.io/v2
kind: Mapping
metadata:
  name: acme-challenge-mapping
spec:
  prefix: /.well-known/acme-challenge/
  rewrite: ""
  service: acme-challenge-service
---
apiVersion: v1
kind: Service
metadata:
  name: acme-challenge-service
spec:
  ports:
  - port: 80
    targetPort: 8089
  selector:
    acme.cert-manager.io/http01-solver: "true"
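Once everything is applied, progress can be checked with standard cert-manager commands (the certificate name here matches the sketch above and may differ in your cluster):
kubectl get clusterissuer
kubectl get certificate -n ambassador
kubectl describe certificate quote-cert -n ambassador   # events show the ACME order and challenge status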

Related

Enable istio mTLS STRICT with MongoDB

I have a technical difficulty: I am trying to enable 'STRICT' mutual TLS.
I have a stateless service (name: "my-service" / ServiceAccount / Service / Deployment) and a stateful database ( name: "database" / ServiceAccount / Service with clusterIP: None & port: 27017 / StatefulSet ).
Without PeerAuthentication, everything works well. But when I enable STRICT PeerAuthentication on 'istio-system', the service doesn't start correctly (1/2 READY).
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system
spec:
  mtls:
    mode: STRICT
I tried to add a "DestinationRule":
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: database
  namespace: my-namespace
spec:
  host: database
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL
I tried to add an "AuthorizationPolicy":
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: database
  namespace: striper
spec:
  selector:
    matchLabels:
      app: database
  rules:
  - from:
    - source:
        principals: ["*"]
Without success...
To connect to the database, I use "database" as the host and "27017" as the port, and both the service and the database are in the same namespace 'my-namespace'.
Any help is welcome ^_^
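A sketch of one commonly suggested workaround is to relax mTLS only on the MongoDB port with portLevelMtls, while keeping STRICT everywhere else (the selector label and namespace are assumptions based on the names above):
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: database-mtls
  namespace: my-namespace
spec:
  selector:
    matchLabels:
      app: database            # assumed label on the MongoDB StatefulSet pods
  mtls:
    mode: STRICT
  portLevelMtls:
    27017:
      mode: PERMISSIVE         # accept plain text on the MongoDB port only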

Enable IAP on Ingress

I've followed the documentation about how to enable IAP on GKE.
I've:
configured the consent screen
created OAuth credentials
added the universal redirect URL
added myself as an IAP-secured Web App User
And wrote my deployment like this:
apiVersion: v1
data:
  client_id: <my_id>
  client_secret: <my_secret>
kind: Secret
metadata:
  name: backend-iap-secret
type: Opaque
---
apiVersion: v1
kind: Service
metadata:
  name: grafana
spec:
  ports:
  - port: 443
    protocol: TCP
    targetPort: 3000
  selector:
    k8s-app: grafana
  type: NodePort
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: grafana
spec:
  replicas: 1
  template:
    metadata:
      labels:
        k8s-app: grafana
    spec:
      containers:
      - env:
        - name: GF_SERVER_HTTP_PORT
          value: "3000"
        image: docker.io/grafana/grafana:6.7.1
        name: grafana
        ports:
        - containerPort: 3000
          protocol: TCP
        readinessProbe:
          httpGet:
            path: /api/health
            port: 3000
---
apiVersion: cloud.google.com/v1beta1
kind: BackendConfig
metadata:
  name: backend-config-iap
spec:
  iap:
    enabled: true
    oauthclientCredentials:
      secretName: backend-iap-secret
---
apiVersion: networking.gke.io/v1beta1
kind: ManagedCertificate
metadata:
  name: monitoring-tls
spec:
  domains:
  - monitoring.foo.com
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    beta.cloud.google.com/backend-config: '{"default": "backend-config-iap"}'
    kubernetes.io/ingress.global-static-ip-name: monitoring
    networking.gke.io/managed-certificates: monitoring-tls
  name: grafana
spec:
  backend:
    serviceName: grafana
    servicePort: 443
When I look at my Ingress, I see this:
$ k describe ingress
Name: grafana
[...]
Annotations: beta.cloud.google.com/backend-config: {"default": "backend-config-iap"}
ingress.kubernetes.io/backends: {"k8s-blabla":"HEALTHY"}
[...]
Events: <none>
$
I can connect to the web page without any problem, and Grafana is up and running, but I can also connect without being authenticated (which is a problem).
So everything looks fine, but IAP is not activated. Why?
The worst part is that if I enable it manually it works, but if I redo kubectl apply -f monitoring.yaml, IAP is disabled.
What am I missing?
Because my secret values are stored in Secret Manager (and retrieved at build time), I suspected my secrets had some glitches (spaces, \n, etc.) in them, so I've added a script to test it:
gcloud compute backend-services update \
  --project=<my_project_id> \
  --global \
  $(kubectl get ingress grafana -o json | jq -r '.metadata.annotations."ingress.kubernetes.io/backends"' | jq -r 'keys[0]') \
  --iap=enabled,oauth2-client-id=$(gcloud --project="<my_project_id>" beta secrets versions access latest --secret=Monitoring_client_id),oauth2-client-secret=$(gcloud --project="<my_project_id>" beta secrets versions access latest --secret=Monitoring_secret)
And now IAP is properly enabled with the correct OAuth Client, so my secrets are "clean"
By the way, I also tried to rename secret variables like this (from client_id):
* oauth_client_id
* oauth-client-id
* clientID (like in the backend documentation)
I've also written the values in the BackendConfig directly, like this:
kind: BackendConfig
metadata:
  name: backend-config-iap
spec:
  iap:
    enabled: true
    oauthclientCredentials:
      secretName: backend-iap-secret
      clientID: <value>
      clientSecret: <value>
But that doesn't work either.
Erratum:
The fact that IAP is disabled when I deploy again (after enabling it in the web UI) is caused by my deployment script in this test (I do a kubectl delete first).
But nevertheless, I can't enable IAP with my backend configuration alone.
As suggested I've filed a bug report: https://issuetracker.google.com/issues/153475658
Solution given by Totem
Change the given YAML to this:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.global-static-ip-name: monitoring
    networking.gke.io/managed-certificates: monitoring-tls
  name: grafana
[...]
---
apiVersion: v1
kind: Service
metadata:
  name: grafana
  annotations:
    beta.cloud.google.com/backend-config: '{"default": "backend-config-iap"}'
[...]
The backend is associated with the service and not the Ingress...
Now it works!
You did everything right, just one small change:
The annotation should be added to the Service resource:
apiVersion: v1
kind: Service
metadata:
  annotations:
    beta.cloud.google.com/backend-config: '{"ports": { "443":"backend-config-iap"}}'
  name: grafana
Usually you need to associate it with a port, so I've added this example above, but make sure it works with 443 as expected.
This is based on an internal example I'm using:
beta.cloud.google.com/backend-config: '{"ports": { "3000":"be-cfg"}}'
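To confirm that IAP actually ended up enabled on the backend service, something like this can be used (the backend name is taken from the ingress.kubernetes.io/backends annotation, as in the script from the question):
BACKEND=$(kubectl get ingress grafana -o json \
  | jq -r '.metadata.annotations."ingress.kubernetes.io/backends"' | jq -r 'keys[0]')
gcloud compute backend-services describe "$BACKEND" --global --format="value(iap.enabled)"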

How to allow non-HTTP out of an istio cluster/pod

I've installed Istio v1.0.5 in a k8s cluster (1 master, 2 worker nodes) and have deployed an application that requires HTTP from clients into a service, and this service then needs to communicate out of the cluster. I did not use Helm to install Istio, and the material I've read has a lot of Helm examples for updating the init container config to include the cluster IP CIDR.
From my understanding, this is still an on-going discussion with the devs and the best way to solve this issue is to annotate the deployment with the following:
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: home-devices-deployment
  namespace: home-devices-app
  labels:
    app: home-devices-app
  annotations:
    traffic.sidecar.istio.io/includeOutboundIPRanges: "10.244.0.0/16"
I put in my clusterIP CIDR but it still doesn't allow the container to connect to an external system via SSH/TCP 22.
ubuntu@k8s-master:~/applications$ kubectl cluster-info dump | grep -i cidr
"podCIDR": "10.244.0.0/24",
"podCIDR": "10.244.1.0/24"
"podCIDR": "10.244.2.0/24"
"--allocate-node-cidrs=true",
"--cluster-cidr=10.244.0.0/16",
"--node-cidr-mask-size=24",
Any help is appreciated.
--update--
I tried ServiceEntries but still am not successful. Please remember this is a container that is SSH'ing externally.
ubuntu@k8s-master:~/applications$ kubectl get serviceentry -n home-devices-app -o yaml
apiVersion: v1
items:
- apiVersion: networking.istio.io/v1alpha3
  kind: ServiceEntry
  metadata:
    creationTimestamp: "2019-01-10T02:45:27Z"
    generation: 1
    name: ex-ssh-service-entry
    namespace: home-devices-app
    resourceVersion: "1432196"
    selfLink: /apis/networking.istio.io/v1alpha3/namespaces/home-devices-app/serviceentries/ex-ssh-service-entry
    uid: c9b22284-1481-11e9-ad97-000c297d3726
  spec:
    addresses:
    - 10.10.10.5
    hosts:
    - '*.ca'
    location: MESH_EXTERNAL
    ports:
    - name: ssh
      number: 22
      protocol: TCP
    resolution: NONE
- apiVersion: networking.istio.io/v1alpha3
  kind: ServiceEntry
  metadata:
    creationTimestamp: "2019-01-10T02:45:27Z"
    generation: 1
    name: srx-ssh-service-entry
    namespace: home-devices-app
    resourceVersion: "1432197"
    selfLink: /apis/networking.istio.io/v1alpha3/namespaces/home-devices-app/serviceentries/srx-ssh-service-entry
    uid: c9b3b586-1481-11e9-ad97-000c297d3726
  spec:
    addresses:
    - 10.10.10.6
    hosts:
    - '*.ca'
    location: MESH_EXTERNAL
    ports:
    - name: ssh
      number: 22
      protocol: TCP
    resolution: NONE
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
Try adding a service entry like below. It worked for me.
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: ext-svcentry
spec:
  hosts:
  - "*.com"
  location: MESH_EXTERNAL
  addresses:
  - 11.22.33.44
  ports:
  - number: 8080
    name: http
    protocol: TCP
  resolution: NONE
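For the SSH case from the question, the same pattern would presumably use port 22 instead of 8080; here is a hedged sketch (the host pattern and addresses are taken from the question, the resource name is arbitrary):
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: ext-ssh-svcentry
  namespace: home-devices-app
spec:
  hosts:
  - "*.ca"                  # external hosts from the question
  addresses:
  - 10.10.10.5
  - 10.10.10.6
  location: MESH_EXTERNAL
  ports:
  - number: 22
    name: tcp-ssh           # "tcp-" prefix follows Istio's port-naming convention
    protocol: TCP
  resolution: NONE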

How to debug QuotaSpecBinding for rate-limits in istio?

I am trying to enable rate-limiting for my Istio-enabled service, but it doesn't work. How do I debug whether my configuration is correct?
apiVersion: config.istio.io/v1alpha2
kind: memquota
metadata:
  name: handler
  namespace: istio-system
spec:
  quotas:
  - name: requestcount.quota.istio-system
    maxAmount: 5
    validDuration: 1s
    overrides:
    - dimensions:
        engine: myEngineValue
      maxAmount: 5
      validDuration: 1s
---
apiVersion: config.istio.io/v1alpha2
kind: quota
metadata:
  name: requestcount
  namespace: istio-system
spec:
  dimensions:
    source: request.headers["x-forwarded-for"] | "unknown"
    destination: destination.labels["app"] | destination.service | "unknown"
    destinationVersion: destination.labels["version"] | "unknown"
    engine: destination.labels["engine"] | "unknown"
---
apiVersion: config.istio.io/v1alpha2
kind: QuotaSpec
metadata:
  name: request-count
  namespace: istio-system
spec:
  rules:
  - quotas:
    - charge: 1
      quota: requestcount
---
apiVersion: config.istio.io/v1alpha2
kind: QuotaSpecBinding
metadata:
  name: request-count
  namespace: istio-system
spec:
  quotaSpecs:
  - name: request-count
    namespace: istio-system
  services:
  # - service: '*' ; I tried with this as well
  - name: my-service
    namespace: default
---
apiVersion: config.istio.io/v1alpha2
kind: rule
metadata:
  name: quota
  namespace: istio-system
spec:
  actions:
  - handler: handler.memquota
    instances:
    - requestcount.quota
I tried with - service: '*' as well in the QuotaSpecBinding, but no luck.
How do I confirm whether my configuration is correct? my-service is the Kubernetes Service for my deployment. (Does this have to be an Istio VirtualService for rate limits to work? Edit: Yes, it does!)
I followed this doc except the VirtualService part.
I have a feeling I am making a mistake somewhere in the namespaces.
You have to define the virtual service for the service my-service:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: myservice
spec:
  hosts:
  - myservice
  http:
  - route:
    - destination:
        host: myservice
This way, you let Istio know which host/service you are referring to.
In terms of debugging, there is a project named Kiali that aims to improve observability in Istio environments. It includes validations for some Istio and Kubernetes objects: Istio configuration browse.
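Another hedged option is to check that the Mixer resources were actually created and to look at the policy Mixer's logs; the deployment and container names below are assumptions and vary across Istio releases:
# Confirm the quota resources exist where the rule expects them
kubectl -n istio-system get quotaspecs.config.istio.io,quotaspecbindings.config.istio.io,rules.config.istio.io
# Look for quota-related messages in the policy Mixer (deployment/container names differ per release)
kubectl -n istio-system logs deploy/istio-policy -c mixer | grep -i quota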

Kubernetes basic authentication with Traefik

I am trying to configure Basic Authentication on an Nginx example with Traefik as the Ingress controller.
I just created the secret "mypasswd" in the Kubernetes secrets.
This is the Ingress I am using:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginxingress
  annotations:
    ingress.kubernetes.io/auth-type: basic
    ingress.kubernetes.io/auth-realm: traefik
    ingress.kubernetes.io/auth-secret: mypasswd
spec:
  rules:
  - host: nginx.mycompany.com
    http:
      paths:
      - path: /
        backend:
          serviceName: nginxservice
          servicePort: 80
I checked the Traefik dashboard and it appears there. If I access nginx.mycompany.com I can see the Nginx webpage, but without basic authentication.
This is my nginx deployment:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
Nginx service:
apiVersion: v1
kind: Service
metadata:
  labels:
    name: nginxservice
  name: nginxservice
spec:
  ports:
  # The port that this service should serve on.
  - port: 80
  # Label keys and values that must match in order to receive traffic for this service.
  selector:
    app: nginx
  type: ClusterIP
It is popular to use basic authentication. In reference to the Kubernetes documentation, you should be able to protect access to Traefik using the following steps:
Create an authentication file using the htpasswd tool (the username is a placeholder). You'll be asked for a password for the user:
htpasswd -c ./auth <username>
Now use kubectl to create a secret in the monitoring namespace using the file created by htpasswd:
kubectl create secret generic mysecret --from-file auth --namespace=monitoring
Enable basic authentication by attaching annotations to the Ingress object:
ingress.kubernetes.io/auth-type: "basic"
ingress.kubernetes.io/auth-secret: "mysecret"
So, a full example config of basic authentication can look like:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: prometheus-dashboard
  namespace: monitoring
  annotations:
    kubernetes.io/ingress.class: traefik
    ingress.kubernetes.io/auth-type: "basic"
    ingress.kubernetes.io/auth-secret: "mysecret"
spec:
  rules:
  - host: dashboard.prometheus.example.com
    http:
      paths:
      - backend:
          serviceName: prometheus
          servicePort: 9090
You can apply the example as follows:
kubectl create -f prometheus-ingress.yaml -n monitoring
This should work without any issues.
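To verify that credentials are actually required, a quick check from outside the cluster might look like this (the hostname is the one from the example above, and the credentials are the ones created with htpasswd):
# Without credentials: expect HTTP 401 Unauthorized
curl -I https://dashboard.prometheus.example.com
# With credentials: expect the Prometheus response
curl -I -u <username>:<password> https://dashboard.prometheus.example.com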
Basic Auth configuration for Kubernetes and Traefik 2 seems to have changed slightly. It took me some time to find the solution, so I want to share it. I use k3s, by the way.
Steps 1 + 2 are identical to what @d0bry wrote; create the secret:
printf "my-username:`openssl passwd -apr1`\n" >> my-auth
kubectl create secret generic my-auth --from-file my-auth --namespace my-namespace
Step 3 is to create the Ingress object and apply a Middleware that will handle the authentication:
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: my-auth-middleware
  namespace: my-namespace
spec:
  basicAuth:
    removeHeader: true
    secret: my-auth
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  namespace: my-namespace
  annotations:
    kubernetes.io/ingress.class: traefik
    traefik.ingress.kubernetes.io/router.middlewares: my-namespace-my-auth-middleware@kubernetescrd
spec:
  rules:
  - host: my.domain.net
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service
            port:
              number: 8080
And then of course apply the configuration
kubectl apply -f my-ingress.yaml
refs:
https://doc.traefik.io/traefik/routing/providers/kubernetes-ingress/
https://doc.traefik.io/traefik/middlewares/http/basicauth/
With the latest traefik (verified with 2.7) it got even simpler. Just create a secret of type kubernetes.io/basic-auth and use that in your middleware. No need to create the username:password string first and create a secret from that.
apiVersion: v1
kind: Secret
metadata:
  name: my-auth
  namespace: my-namespace
type: kubernetes.io/basic-auth
data:
  username: <username in base64>
  password: <password in base64>
---
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: my-auth-middleware
  namespace: my-namespace
spec:
  basicAuth:
    removeHeader: true
    secret: my-auth
Note that the password is not hashed as it is with htpasswd, but only base64 encoded.
Ref docs
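As a hedged addition on filling in those values (standard Kubernetes behaviour, not Traefik-specific), the base64 strings can be produced with echo:
echo -n 'my-username' | base64
echo -n 'my-password' | base64
Alternatively, the stringData field accepts plain-text values and the API server stores them base64 encoded:
apiVersion: v1
kind: Secret
metadata:
  name: my-auth
  namespace: my-namespace
type: kubernetes.io/basic-auth
stringData:
  username: my-username
  password: my-password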