Is there any versioning for Kubernetes Ingress?

I want to know if there is versioning for Ingress config similar to what we have in Deployments. If there is a misconfiguration, I would like to be able to revert to the previous config.
I would also like to understand the generation field in the Ingress YAML config.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/service-match: 'new-nginx: header("foo", /^bar$/)' # Canary release rule. In this example, the request header is used.
    nginx.ingress.kubernetes.io/service-weight: 'new-nginx: 50,old-nginx: 50' # The route weight.
  creationTimestamp: null
  generation: 1
  name: nginx-ingress
  selfLink: /apis/extensions/v1beta1/namespaces/default/ingresses/nginx-ingress
spec:
  rules: # The Ingress rules.
  - host: foo.bar.com
    http:
      paths:
      - backend:
          serviceName: new-nginx
          servicePort: 80
        path: /
      - backend:
          serviceName: old-nginx
          servicePort: 80
        path: /

Kubernetes does not offer this natively, and neither does a management tool like Rancher. The generation field in metadata is not a version history: it is just a counter the API server increments every time the spec changes, so you cannot use it to roll back.
If you want versioning, you need an infrastructure-as-code tool, like Terraform, Ansible, etc. The config files for these can be versioned in a repo.
Even without those, you can independently export a given Ingress's YAML and commit it to a repo.
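For example, a minimal sketch of that manual approach (the Ingress name and repo layout here are placeholders):
# Export the live Ingress definition and keep it under version control
kubectl get ingress nginx-ingress -n default -o yaml > ingress/nginx-ingress.yaml
git add ingress/nginx-ingress.yaml
git commit -m "snapshot nginx-ingress config"
# To revert a bad change, restore the last good revision and re-apply it
git checkout HEAD~1 -- ingress/nginx-ingress.yaml
kubectl apply -f ingress/nginx-ingress.yaml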

A slightly different way to look at a solution: you could use GitOps. By that I mean you could put all your YAMLs in a git repo, install ArgoCD on your cluster, and then simply let ArgoCD do the syncing for you. The moment you realise you've messed something up in a YAML file, just revert the commit in the git repo. That way you maintain history and get a graceful, non-opinionated solution.
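A minimal ArgoCD Application pointing at such a repo might look like this (the repo URL, path, and namespaces are placeholders):
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: ingress-config
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/your-org/k8s-config.git   # placeholder repo
    targetRevision: main
    path: ingress                                         # directory containing the Ingress YAMLs
  destination:
    server: https://kubernetes.default.svc
    namespace: default
  syncPolicy:
    automated:
      prune: true      # delete resources that were removed from git
      selfHeal: true   # revert manual changes made directly in the cluster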

How to apply a YAML file with Terraform?

How do I do this in Terraform using the Kubernetes provider?
kubectl apply -k "github.com/kubernetes-sigs/aws-efs-csi-driver/deploy/kubernetes/overlays/stable/?ref=release-1.4"
I have searched for a few examples, but none mention applying a YAML file directly.
You can achieve that differently, if that works for you: rather than translating the YAML into Kubernetes provider resources, you apply it from Terraform much like kubectl apply would, without communicating with the Kubernetes API yourself.
I think this method has more advantages than what you were originally asking for, because it removes an external dependency.
There's a kubectl provider you can use. You can download the YAML and commit it with your Terraform code; that way it is also easier to review changes to the YAML between versions.
It would look something like this:
resource "kubectl_manifest" "test" {
yaml_body = <<YAML
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: test-ingress
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
azure/frontdoor: enabled
spec:
rules:
- http:
paths:
- path: /testpath
pathType: "Prefix"
backend:
serviceName: test
servicePort: 80
YAML
}
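Note that this provider does not live in the hashicorp namespace, so you have to declare its source explicitly; the commonly used one is gavinbunney/kubectl (the version pin below is just an example):
terraform {
  required_providers {
    kubectl = {
      source  = "gavinbunney/kubectl"
      version = ">= 1.14.0"
    }
  }
}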
I would suggest you use the aws-efs-csi-driver Helm chart together with the helm Terraform provider.
It would be easy and neat.
## Refer to the provider documentation to configure accordingly; this is the simplest example.
provider "helm" {
  kubernetes {
    config_path = "~/.kube/config"
  }
}

resource "helm_release" "aws_efs_csi_driver" {
  name       = "aws-efs-csi-driver"
  repository = "https://kubernetes-sigs.github.io/aws-efs-csi-driver/"
  chart      = "aws-efs-csi-driver"

  # set {
  #   name  = "..."   # your custom values go here
  #   value = "..."
  # }
}

SSL Certificate added but shows "Kubernetes Ingress controller fake certificate"

I am having the following issue.
I am new to GCP/cloud. I have created a cluster in GKE and deployed our application there, and installed nginx as a pod in the cluster. Our company has an authorized SSL certificate, which I have uploaded under Certificates in GCP.
In the DNS service, I have created an A record which matches the IP of the Ingress.
When I call the URL in the browser, it still shows that the website is insecure, with the message "Kubernetes Ingress controller fake certificate".
I used the following guide: https://cloud.google.com/load-balancing/docs/ssl-certificates/self-managed-certs#console_1
However, I am not able to execute step 3, "Associate an SSL certificate with a target proxy", because it asks for "URL maps" and I am not able to find that in the GCP console.
Has anybody gone through the same issue, or could anybody help me out? It would be great.
Thanks and regards,
I was able to fix this problem by adding an extra argument to the ingress-nginx-controller Deployment.
For context: my TLS secret was in the default namespace and was named letsencrypt-secret-prod, so I wanted to add it as the default SSL certificate for the NGINX controller.
My first solution was to edit the deployment.yaml of the NGINX controller and add the following line at the end of the containers[0].args list:
- '--default-ssl-certificate=default/letsencrypt-secret-prod'
Which made that section of the YAML look like this:
containers:
  - name: controller
    image: >-
      k8s.gcr.io/ingress-nginx/controller:v1.2.0-beta.0@sha256:92115f5062568ebbcd450cd2cf9bffdef8df9fc61e7d5868ba8a7c9d773e0961
    args:
      - /nginx-ingress-controller
      - '--publish-service=$(POD_NAMESPACE)/ingress-nginx-controller'
      - '--election-id=ingress-controller-leader'
      - '--controller-class=k8s.io/ingress-nginx'
      - '--ingress-class=nginx'
      - '--configmap=$(POD_NAMESPACE)/ingress-nginx-controller'
      - '--validating-webhook=:8443'
      - '--validating-webhook-certificate=/usr/local/certificates/cert'
      - '--validating-webhook-key=/usr/local/certificates/key'
      - '--default-ssl-certificate=default/letsencrypt-secret-prod'
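If you installed the controller from raw manifests, that edit can be made in place; the deployment and namespace names below are the usual defaults, but may differ in your setup:
kubectl edit deployment ingress-nginx-controller -n ingress-nginx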
But I was using the Helm chart ingress-nginx/ingress-nginx, so I wanted this config to be in the values.yaml file of that chart, so that I could upgrade it later if necessary.
So, reading the values file, I replaced the attribute controller.extraArgs, which looked like this:
extraArgs: {}
with this:
extraArgs:
  default-ssl-certificate: default/letsencrypt-secret-prod
This restarted the deployment with the argument in the correct place.
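For reference, rolling out that values change would look something like this (the release name and namespace are assumptions):
helm upgrade ingress-nginx ingress-nginx/ingress-nginx -n ingress-nginx -f values.yaml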
Now I can use ingresses without specifying the tls.secretName for each of them, which is awesome.
Here's an example ingress that is working for me with HTTPS:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: some-ingress-name
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/backend-protocol: "HTTP"
spec:
  rules:
  - http:
      paths:
      - path: /some-prefix
        pathType: Prefix
        backend:
          service:
            name: some-service-name
            port:
              number: 80
You can save your SSL/TLS certificate into a Kubernetes Secret and attach it to the Ingress.
You need to configure the tls block in the Ingress; don't forget to add the ingress.class details as well.
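Creating the Secret from an existing certificate and key looks like this (the file names are placeholders):
kubectl create secret tls my-tls-secret --cert=path/to/tls.crt --key=path/to/tls.key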
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: tls-example-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: mydomain.com
    http:
      paths:
      - path: /
        backend:
          serviceName: my-service
          servicePort: 80
  tls:
  - hosts:
    - mydomain.com
    secretName: my-tls-secret
You can read more at: https://medium.com/avmconsulting-blog/how-to-secure-applications-on-kubernetes-ssl-tls-certificates-8f7f5751d788
You might be seeing a certificate warning like this in the browser (screenshot omitted): it comes from the ingress controller, either because the wrong certificate is attached to the Ingress or because the controller is serving its default fake certificate.
In my case, the fault was that I had updated the new TLS certificate in the cattle-system namespace but not in my own namespace, so the Ingress somehow fell back to the Kubernetes fake certificate.
Solution: update all the old certificates to the new one (only the user-facing Ingress certificates; WARNING: touching system certificates may damage your system).
Also, don't forget to check that the Subject Alternative Name (SAN) actually contains the same value as the CN. If it does not, the certificate is treated as invalid, because the industry has moved away from relying on the CN. I just learned this now.
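A quick way to inspect the SAN entries of a certificate (the file name is a placeholder):
openssl x509 -in tls.crt -noout -text | grep -A1 'Subject Alternative Name'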

Kubernetes and failover service

I have the following situation, and I am not sure if it is possible to achieve using Kubernetes only.
There are two pods: one with the live service, available on e.g. givemedata.example.com, and a temporary one on givemedata2.example.com. The temporary one has no dependencies and is there just to allow some long-running jobs to continue while the real one is down for maintenance.
The cluster is on GKE.
I need to set up failover so that if the live one is down for maintenance, calls are redirected to the temporary one.
The main reason is to use only one endpoint, e.g. givemedata.example.com.
My question is whether this is possible, and how to do it?
You can install the NGINX ingress controller. Add the
nginx.ingress.kubernetes.io/server-snippet: annotation to the Ingress configuration file, and add a block of code which checks whether the primary domain is available; if not, it redirects traffic to the other one.
Thanks to this annotation, it is possible to add custom configuration to the server configuration block.
Read more: server snippet.
The controller renders these snippets into the generated nginx.conf; you can inspect the result under /etc/nginx in the controller pod.
Here is an example configuration:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  namespace: ingress-nginx
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/add-base-url: "true"
    nginx.ingress.kubernetes.io/configuration-snippet: |
      set $agentflag 0;
      if ($http_user_agent ~* "(Mobile)" ){
        set $agentflag 1;
      }
      if ( $agentflag = 1 ) {
        return 301 https://givemedata2.example.com;
      }
spec:
  rules:
  - host: givemedata.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: service-name
          servicePort: service-port
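Note that the snippet above redirects based on the user agent rather than on availability. A sketch closer to the maintenance failover asked about, redirecting to the temporary domain whenever the primary backend returns an error (untested and assumption-laden; the fallback host is the one from the question):
nginx.ingress.kubernetes.io/server-snippet: |
  # If the primary upstream fails, send clients to the temporary service
  error_page 502 503 504 = @maintenance;
  location @maintenance {
    return 302 https://givemedata2.example.com$request_uri;
  }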

Trying to rewrite URL for Grafana with Ingress

In my Kubernetes cluster I would like to do monitoring, so I installed Grafana.
I would like to access the Grafana dashboard at http://example.com/monitoring, so I tried to include this in my Ingress configuration:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /monitoring/(.*)
        backend:
          serviceName: grafana
          servicePort: 80
The idea is to add other paths there as well, for example / for the website.
I noticed that Grafana redirects http://example.com/monitoring to http://example.com/login. Of course this should have been http://example.com/monitoring/login. What would be the preferred way to fix this? Can it be done with the Ingress, or should I somehow tell Grafana that it is behind a /monitoring path (if possible)?
I've installed Grafana with Helm (using the stable/grafana chart).
UPDATE: As suggested in the answer below, I've modified the Grafana chart's values.yaml as follows:
grafana.ini:
  server:
    domain: example.com
    root_url: http://example.com/monitoring/
Now I get the following (screenshot missing).
And the helm command I use to install Grafana:
$> helm install stable/grafana -f values.yaml --set persistence.enabled=true --set persistence.accessModes={ReadWriteOnce} --set persistence.size=8Gi -n grafana
This is a common problem with services that sit behind an HTTP reverse proxy. Luckily, Grafana offers a way to let it know the context path it is running behind.
In grafana.ini (which is most likely supplied via a ConfigMap to its Kubernetes Deployment), you need to specify the variables like the following:
[server]
domain = example.com
root_url = http://example.com/monitoring/
See the full documentation here: https://grafana.com/docs/installation/behind_proxy/
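If your proxy does not strip the /monitoring prefix (the rewrite-target in the question does strip it), Grafana can also serve from the sub path itself via serve_from_sub_path, available from Grafana 6.3 onwards:
[server]
domain = example.com
root_url = http://example.com/monitoring/
serve_from_sub_path = true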

Ingress for Kubernetes Wordpress

I've recently set up a Kubernetes cluster, and I am brand new to all of this, so it's quite a bit to take in. Currently I am trying to set up an Ingress for WordPress deployments. I am able to access them through a NodePort, but I know NodePort is not recommended, so I am trying to set up the Ingress. I am not exactly sure how to do it and I can't find many guides. I followed this to set up the NGINX LB https://github.com/nginxinc/kubernetes-ingress/tree/master/examples/complete-example and I used this to set up the WP deployment https://docs.docker.com/ee/ucp/admin/configure/use-nfs-volumes/#inspect-the-deployment
I would like to be able to have multiple WP deployments and have an Ingress that resolves to the correct one, but I really can't find much information on it. Any help is greatly appreciated!
You can configure your Ingress to forward traffic to a different service depending on the path.
An example of such a configuration is this:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /foo
        backend:
          serviceName: s1
          servicePort: 80
      - path: /bar
        backend:
          serviceName: s2
          servicePort: 80
Read the Kubernetes documentation on Ingress for more info.
PS: In order for this to work you need an ingress controller like the one in the links in your question.
If you are on AWS, I highly recommend the ALB ingress controller in conjunction with external-dns. These, in combination with WordPress Multisite, give you some powerful options when it comes to providing dynamic ingress to new sites.
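As a rough illustration, an Ingress for one site under that setup might look like this (the hostname and service name are placeholders, and the exact annotation set varies between ALB controller versions; external-dns picks the Route 53 record up from the host rule):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: site1-wordpress
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  rules:
  - host: site1.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: site1-wordpress
            port:
              number: 80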
If you start running into any wonky issues (e.g. being unable to log in to the admin, redirect loops, disappearing media) after getting that all set up, I wrote a guide on some of the more common issues people run into when running WordPress on Kubernetes; it might be worth having a look!