Minikube: secrets not included in (only) one pod - kubernetes

I am testing a deployment on minikube.
It requires two pods: one from a web app Deployment and one from a CronJob.
For the CronJob:
I start minikube and use
minikube addons configure registry-creds
minikube addons enable registry-creds
I use Helm to deploy the CronJob; the image pulls, the pod starts, and I can see the awsecr-cred secret in the pod.
Everything works!
For the web app:
Rinse and repeat. Both are in the default namespace, and both are deployed by Helm to the same minikube cluster.
However, all secrets are missing from the web app pod and, not surprisingly, it cannot pull the image.
Can someone give me a pointer as to why the secrets are missing from this pod?
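For reference, one way to narrow this down is to compare what each pod and the default service account actually reference. A rough sketch, assuming the default namespace and the awsecr-cred secret created by the registry-creds addon (pod names are placeholders):
kubectl get secret awsecr-cred -n default
kubectl get pod <cronjob-pod> -o jsonpath='{.spec.imagePullSecrets}'
kubectl get pod <webapp-pod> -o jsonpath='{.spec.imagePullSecrets}'
kubectl get serviceaccount default -o jsonpath='{.imagePullSecrets}'
If the web app chart uses its own serviceAccountName or never adds imagePullSecrets to its pod template, the difference should show up here.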

Related

Grafana & Loki agents not deployed in Tainted nodes

We are running our workloads on AKS. We have two node pools:
1. System-Node-Pool: where all system pods run
2. Apps-Node-Pool: where our actual workloads/apps run
Our Apps-Node-Pool is tainted, whereas the System-Node-Pool isn't. I deployed the Grafana-Loki stack for monitoring and log analysis, using the Helm command below to install it.
helm upgrade --install loki grafana/loki-stack --set grafana.enabled=true,prometheus.enabled=true,prometheus.alertmanager.persistentVolume.enabled=false,prometheus.server.persistentVolume.enabled=false,loki.persistence.enabled=true,loki.persistence.storageClassName=standard,loki.persistence.size=5Gi
Since no toleration is added in the Helm command (or in values.yaml), all the Grafana and Loki pods get deployed on the System-Node-Pool. The problem is that, because the necessary agents (for example the Promtail pods) aren't deployed on the Apps-Node-Pool, I can't check the logs of my app pods.
Because the taint exists on the Apps-Node-Pool, adding a toleration in the Helm command would allow the monitoring pods to be scheduled there, but it still wouldn't guarantee it: they may end up on the System-Node-Pool, since that pool has no taint.
Given this cluster setup, what can I do to make sure the agent pods also run on the tainted nodes?
In my case the requirement was to run the Promtail pods on the Apps-Node-Pool. The chart did not add a toleration to the Promtail pods, so I added one myself, and the Promtail pods were then deployed to the Apps-Node-Pool.
However, adding a toleration to the Promtail pods still doesn't guarantee they land on the Apps-Node-Pool, because in my case the System-Node-Pool has no taint and remains a valid scheduling target.
In this case you can leverage both node affinity and tolerations to deploy the pods exclusively on a specific node pool, as sketched below.
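A minimal values.yaml sketch, assuming the loki-stack chart passes promtail.tolerations and promtail.affinity through to the Promtail DaemonSet (the taint key, value and node pool name are placeholders; adjust them to your cluster):
promtail:
  tolerations:
    - key: "workload"                # assumed taint key on Apps-Node-Pool
      operator: "Equal"
      value: "apps"
      effect: "NoSchedule"
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: agentpool       # AKS labels each node with its pool name
                operator: In
                values:
                  - appsnodepool     # assumed Apps-Node-Pool name
Passing this with -f values.yaml on the helm upgrade --install command above should pin Promtail to the tainted pool while still tolerating its taint.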

Argo CD pod can't pull the image

I'm running a hybrid AKS cluster where I have one Linux and one Windows node. The Windows node will run a legacy server app. I want to use Argo CD to simplify the deployments.
After following the installation instructions (installation manifest) and installing Argo in the cluster, I noticed I couldn't connect to its dashboard.
Troubleshooting the issue, I found that the Argo pod can't pull the image. Below is the output of kubectl describe pod argocd-server-75b6967787-xfccz -n argocd.
Another thing visible here is that the Argo pod got assigned to the Windows node. From what I found here, Argo can't run on Windows nodes. I think that's the root cause of the problem.
Does anyone know how I can force the Argo pods to run on the Linux node?
I found that something like nodeSelector could be useful.
nodeSelector:
  kubernetes.io/os: linux
But how can I apply the nodeSelector to the already deployed Argo?
Does anyone know how I can force the Argo pods to run on the Linux node?
Check the labels on the other node and set the node selector accordingly:
nodeSelector:
  kubernetes.io/os: linux
You can edit the current Argo CD deployment directly:
kubectl edit deployment argocd-server -n <namespace name>
and update the node selector from the CLI.
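If you'd rather not open an editor, an equivalent one-off patch (assuming Argo CD was installed into the argocd namespace) would look roughly like this:
kubectl patch deployment argocd-server -n argocd --type merge -p '{"spec":{"template":{"spec":{"nodeSelector":{"kubernetes.io/os":"linux"}}}}}'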
I added a nodeSelector to the pod templates of the argocd-server, argocd-repo-server and argocd-application-controller workloads (the former two are Deployments, the latter is a StatefulSet).
After that I redeployed the manifests. The images got pulled and I was able to access the Argo dashboard.
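For reference, the addition sits under the pod template of each of those workloads; roughly (surrounding fields abbreviated):
spec:
  template:
    spec:
      nodeSelector:
        kubernetes.io/os: linux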

Role of Helm install command vs kubectl command in Kubernetes cluster deployment

I have a Kubernetes cluster with 1 master node and 2 worker nodes, and another machine where I installed Helm. I am trying to create Kubernetes resources using a Helm chart and deploy them to the remote cluster.
While reading about the helm install command, I found that we need both the helm and kubectl commands for deploying.
My confusion is this: when we use helm install, the chart is deployed to Kubernetes, and we can also push it to a chart repo. So Helm is what we use for deploying; why, then, do we also need the kubectl command alongside Helm?
Helm 3: no Tiller. helm install talks to the Kubernetes API directly, using the same kubeconfig that kubectl reads, so to use Helm you also need a configured kubectl.
Helm 2:
Helm/Tiller are client/server; helm needs to connect to Tiller to initiate the deployment. Because Tiller is not publicly exposed, helm uses your kubectl configuration underneath to open a tunnel to Tiller. See here: https://github.com/helm/helm/issues/3745#issuecomment-376405184
So to use helm, you also need a configured kubectl. More details: https://helm.sh/docs/using_helm/
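To illustrate, both tools resolve the cluster from the same kubeconfig; a quick sanity check from the machine running Helm might look like this (Helm 3 syntax, release and chart names are placeholders):
kubectl config current-context                  # the context kubectl talks to
helm install my-release ./my-chart --kube-context "$(kubectl config current-context)"   # Helm targets the same context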
Chart repo: this is a different concept (the same for Helm 2 / Helm 3), and it's not mandatory to use one. Repos are like artifact storage; for example, in the quay.io application registry you can audit who pushed and who used a chart. More details: https://github.com/helm/helm/blob/master/docs/chart_repository.md. You can always bypass the repo and install from source, like: helm install /path/to/chart/src
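For example, the two install paths side by side (repo URL and names are placeholders; Helm 3 syntax):
helm repo add example https://charts.example.com   # register a chart repository
helm repo update
helm install my-release example/my-chart           # install from the repo
helm install my-release ./path/to/chart            # or install straight from local chart source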

Deploying Images from gitlab in a new namespace in Kubernetes

I have integrated GitLab with a Kubernetes cluster hosted on AWS. Currently it builds the code from GitLab and deploys it to the default namespace. I have created two namespaces in Kubernetes, one for production and one for development. What are the steps if I want the deployment to go to the dev or the production namespace? Do I need to make changes at the GitLab level or at the Kubernetes level?
This is done at the Kubernetes level. Whether you're using helm or kubectl, you can specify the desired namespace in the command.
As in:
kubectl create -f deployment.yaml --namespace <desired-namespace>
helm install stable/gitlab-ce --namespace <desired-namespace>
Alternatively, you can just change your current namespace to the desired namespace and install as you did before. By default, Helm charts and Kubernetes YAML files install into your current namespace unless specified otherwise.
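For example, switching the current namespace before installing (both kubectl and Helm pick this up, since they share the kubeconfig):
kubectl config set-context --current --namespace=<desired-namespace>
kubectl config view --minify | grep namespace   # verify the active namespace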

Kubernetes helm - Running helm install in a running pod

I want to spin up a single installer pod with helm install that, once running, will apply some logic and install other applications into my cluster, also using helm install.
I'm aware of Helm dependencies, but I want to run some business logic along with the installations, and I'd rather do that in the installer pod than on the host triggering the whole installation process.
I found suggestions to use the Kubernetes REST API from inside a pod, but helm requires kubectl to be installed and configured.
Any ideas?
It seems this was a lot easier than I thought...
On a simple pod running Debian, I just installed kubectl, and with the default service account's secret that's already mounted, kubectl was already configured to talk to the cluster's API.
Note that the configured default namespace is the one my installer pod is deployed to.
Verified with
$ kubectl cluster-info
$ kubectl get ns
I then installed helm, which was already using kubectl to access the cluster and install Tiller.
Verified with
$ helm version
$ helm init
I installed a test chart
$ helm install --name my-release stable/wordpress
It works!!
I hope this helps
You could add kubectl to your installer pod.
"In cluster" credentials can be provided via the service account's "default-token" secret: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/