Custom configmap for nginx ingress controller installed with gitlab managed apps - kubernetes

I have an nginx ingress controller installed via GitLab managed apps.
I would like to disable HSTS for subdomains. I know I can disable it via a custom ConfigMap (https://kubernetes.github.io/ingress-nginx/user-guide/tls/),
but I don't know where to place that ConfigMap and how to name it so the GitLab-managed ingress will pick it up.

What I did in the end was use https://docs.gitlab.com/ee/user/clusters/applications.html#install-using-gitlab-cicd.
That way the gitlab-managed-apps are not managed via the UI but with a "cluster management project".
Now I don't have to figure out how to place that ConfigMap in my cluster (and how to name it); I can just configure the ingress controller (and everything else) via the Helm chart with a simple values.yaml.
I just cloned the https://gitlab.com/gitlab-org/cluster-integration/example-cluster-applications/ example and added:
# .gitlab/managed-apps/ingress/values.yml
controller:
  replicaCount: 1
  config:
    hsts-include-subdomains: "false"
This is still an alpha feature, but for now it works well for me :-)
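For reference, the setting from the linked ingress-nginx docs would look roughly like the ConfigMap sketch below; the name and namespace are placeholders and must match whatever the controller's --configmap argument points at, which is exactly the part that's hard to know with the UI-managed install:
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller    # placeholder: must match the controller's --configmap flag
  namespace: gitlab-managed-apps    # placeholder: the namespace GitLab installed the ingress into
data:
  hsts-include-subdomains: "false"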

Related

Kubernetes - Reconfiguring a Service to point to a new Deployment (blue/green)

I'm following along with a video explaining blue/green Deployments in Kubernetes. They have a simple example with a Deployment named blue-nginx and another named green-nginx.
The blue Deployment is exposed via a Service named bgnginx. To transfer traffic from the blue deployment to the green deployment, the Service is deleted and the green deployment is exposed via a Service with the same name. This is done with the following one-liner:
kubectl delete svc bgnginx; kubectl expose deploy green-nginx --port=80 --name=bgnginx
Obviously, this works successfully. However, I'm wondering why they don't just use kubectl edit to change the labels in the Service instead of deleting and recreating it. If I edit bgnginx and set .metadata.labels.app & .spec.selector.app to green-nginx it achieves the same thing.
Is there a benefit to deleting and recreating the Service, or am I safe just editing it?
Yes, you can use kubectl edit svc and edit the labels & selector there.
It's fine; however, a declarative YAML file (or another option) is suggested because kubectl edit is an error-prone approach: you might face indentation issues.
Is there a benefit to deleting and recreating the Service, or am I safe just editing it?
It's more about following best practices: you have a declarative YAML file handy, with version control if you are managing one.
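If you do want to flip the selector in place without opening an editor, a non-interactive alternative to kubectl edit is kubectl patch; a minimal sketch, assuming the selector key is app as in the question:
# point the Service at the green Deployment's Pods
kubectl patch svc bgnginx -p '{"spec":{"selector":{"app":"green-nginx"}}}'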
The problem with kubectl edit is that it requires a human to operate a text editor. This is a little inefficient and things do occasionally go wrong.
I suspect the reason your writeup wants you to kubectl delete the Service first is that the kubectl expose command will fail if it already exists. But as #HarshManvar suggests in their answer, a better approach is to have an actual YAML file checked into source control:
apiVersion: v1
kind: Service
metadata:
  name: myapp
  labels:
    app.kubernetes.io/name: myapp
spec:
  selector:
    app.kubernetes.io/name: myapp
    example.com/deployment: blue
You should be able to kubectl apply -f service.yaml to deploy it into the cluster, or a tool can do that automatically.
The problem here is that you still have to edit the YAML file (or in principle you can do it with sed), and swapping the deployment would result in an extra commit. You can use a tool like Helm that supports an extra templating layer:
spec:
  selector:
    app.kubernetes.io/name: myapp
    example.com/deployment: {{ .Values.color }}
In Helm I might set this up with three separate Helm releases: the "blue" and "green" copies of your application, plus a separate top-level release that just contained the Service.
helm install myapp-blue ./myapp
# do some isolated validation
helm upgrade myapp-router ./router --set color=blue
# do some more validation
helm uninstall myapp-green
You can do similar things with other templating tools like ytt or overlay layers like Kustomize. The Service's selector: doesn't have to match its own metadata, and you could create a Service that matched both copies of the application, maybe for a canary pattern rather than a blue/green deployment.
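For example, a Service whose selector omits the per-color label would match the Pods of both Deployments at once; a minimal sketch using the same hypothetical labels as above:
spec:
  selector:
    # no example.com/deployment key, so Pods from both the blue and green Deployments are selected
    app.kubernetes.io/name: myapp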

Is it possible to display the Traefik IngressRoutes in the Kubernetes Dashboard?

I'm using Traefik with IngressRoute resources.
With kubectl api-resources it is defined as:
NAME            SHORTNAMES   APIVERSION                     NAMESPACED   KIND
...
ingressroutes                traefik.containo.us/v1alpha1   true         IngressRoute
...
My problem is that in the Kubernetes Dashboard only Ingress resources can be viewed, so IngressRoute resources are not displayed.
How can I get the ability to see IngressRoute resources instead of just Ingresses?
Kubernetes Dashboard does not have the ability to display Traefik IngressRoute resources the same way it shows Ingress, without changing its source code.
If you want, you can create a feature request in the dashboard GitHub repo and follow the Improve resource support #5232 issue. Maybe such a feature will be added in the future.
In the meantime, you can use Traefik's own dashboard.
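A common way to reach it is a port-forward to the Traefik workload; the sketch below assumes the Helm chart defaults (a Deployment named traefik in the traefik namespace, with the dashboard enabled on the internal entrypoint port 9000), which may differ in your install:
kubectl -n traefik port-forward deployment/traefik 9000:9000
# then open http://localhost:9000/dashboard/ in a browser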

Application not showing in ArgoCD when applying yaml

I am trying to setup ArgoCD for gitops. I used the ArgoCD helm chart to deploy it to my local Docker Desktop Kubernetes cluster. I am trying to use the app of apps pattern for ArgoCD.
The problem is that when I apply the yaml to create the root app, nothing happens.
Here is the yaml (created by the command helm template apps/ -n argocd from my public repo https://github.com/gajewa/gitops):
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: root
  finalizers:
    - resources-finalizer.argocd.argoproj.io
spec:
  destination:
    server: http://kubernetes.default.svc
    namespace: argocd
  project: default
  source:
    path: apps/
    repoURL: https://github.com/gajewa/gitops.git
    targetRevision: HEAD
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
The resource is created, but nothing actually happens in the Argo UI; no application is visible. So I tried to create the app via the web UI, even pasting the yaml in there. The application is created in the web UI and it seems to synchronise and see the repo with the yaml templates of Prometheus and Argo, but it doesn't actually create the Prometheus application in ArgoCD, and the Prometheus part of the root app is forever progressing.
Here are some screenshots:
The main page with the root application (where argo-cd and prometheus should also be visible but aren't):
And the root app view, where something is created for each template but Argo doesn't seem to be able to create the Kubernetes deployments/pods etc. from it:
I thought maybe the CRD definitions are not present in the k8s cluster but I checked and they're there:
λ kubectl get crd
NAME                       CREATED AT
applications.argoproj.io   2021-10-30T16:27:07Z
appprojects.argoproj.io    2021-10-30T16:27:07Z
I've run out of things to check for why the apps aren't actually deployed. I was following this tutorial: https://www.arthurkoziel.com/setting-up-argocd-with-helm/
The problem is that you have to add the code below to the metadata of your manifest file.
Just change the namespace to the one your ArgoCD was deployed in (the default is argocd):
metadata:
  namespace: argocd
From another SO post:
https://stackoverflow.com/a/70276193/13641680
It turns out that at the moment ArgoCD can only recognize Application declarations made in the ArgoCD namespace.
Related GitHub Issue
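Alternatively, if you prefer not to hard-code the namespace in the manifest, you can set it at apply time; a minimal sketch, assuming the rendered manifest is saved as root-app.yaml (a placeholder name):
kubectl apply -n argocd -f root-app.yaml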

How to inject istio to deployments which are deployed using helm

I am trying to move our app deployment to Helm and am facing an obstacle with injecting Istio into it. We do not have namespace-wide Istio injection enabled, so we have to inject it only for specific apps.
I tried googling and nothing came up. Has anyone come across this issue?
So far, we were running a shell script directly through Ansible for injecting and deploying the app, which cannot be used with Helm.
I am not an Istio expert, but here is what I have found:
1 - Installing the Sidecar / More control: it can be helpful in this case to reuse specific Helm labels:
policy: enabled
neverInjectSelector:
  - matchExpressions:
      - {key: openshift.io/build.name, operator: Exists}
2 - Dynamic Admission Webhooks in order to change the default settings during deployments,
3 - Helm templating customization + annotation, postprocessing (labeling); see the sketch after this list:
annotations:
  sidecar.istio.io/inject: "true"
4 - Helm Inject Plugin,
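For point 3, here is a minimal sketch of where the annotation goes in a Helm-templated Deployment; the chart structure, names and values below are placeholders, and whether the annotation actually triggers injection depends on how your sidecar injector webhook is configured:
# templates/deployment.yaml (excerpt)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-app
spec:
  selector:
    matchLabels:
      app: {{ .Release.Name }}-app
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}-app
      annotations:
        # per-Pod opt-in to Istio sidecar injection
        sidecar.istio.io/inject: "true"
    spec:
      containers:
        - name: app
          image: {{ .Values.image }}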
Please let me know if it helped.

Can I use two HAProxy ingress controllers in two different namespaces in the same cluster?

I have deployments for two haproxy ingress controllers in two different namespaces on the same cluster, but the haproxy ingress controller functionality is misbehaving. I just wanted to know: can I create two haproxy deployments in two namespaces and use them effortlessly?
Yes, you can.
In order to achieve this you should use correctly configured RBAC and the --namespace-whitelist flag to point each controller at the correct namespace.
The HAProxy documentation says you can whitelist/blacklist the namespaces the controller watches:
--namespace-whitelist
The controller watches all namespaces, but you can specify a specific
namespace to watch. You can specify this setting multiple times.
--namespace-blacklist
The controller watches all namespaces, but you can blacklist a
namespace that you do not want to watch for changes. You can specify
this setting multiple times.
You can customize the ingress controller Deployment resource in the file haproxy-ingress.yaml in the repository https://github.com/haproxytech/ by adding any of the following arguments under spec.template.spec.containers.args.
Your deployments should look like:
spec:
  serviceAccountName: haproxy-ingress-service-account
  containers:
    - name: haproxy-ingress
      image: haproxytech/kubernetes-ingress
      args:
        - --default-ssl-certificate=default/tls-secret
        - --configmap=default/haproxy-configmap
        - --default-backend-service=haproxy-controller/ingress-default-backend
        - --namespace-whitelist=mynamespace1
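A second controller deployed in the other namespace would then whitelist its own namespace instead; a sketch with placeholder names:
spec:
  serviceAccountName: haproxy-ingress-service-account
  containers:
    - name: haproxy-ingress
      image: haproxytech/kubernetes-ingress
      args:
        - --configmap=mynamespace2/haproxy-configmap
        - --namespace-whitelist=mynamespace2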