I'm running a hybrid AKS cluster where I have one Linux and one Windows node. The Windows node will run a legacy server app. I want to use Argo CD to simplify the deployments.
After following the installation instructions (applying the install manifest) and installing Argo CD in the cluster, I noticed I couldn't connect to its dashboard.
While troubleshooting the issue I found that the Argo CD pod can't pull its image. Below is the output of kubectl describe pod argocd-server-75b6967787-xfccz -n argocd.
Another thing visible there is that the Argo CD pod got assigned to the Windows node. From what I've found, Argo CD can't run on Windows nodes, so I think that's the root cause of the problem.
Does anyone know how I can force the Argo CD pods to run on the Linux node?
I found that something like a nodeSelector could be useful:
nodeSelector:
  kubernetes.io/os: linux
But how can I apply the nodeSelector to the already deployed Argo CD?
Check the labels on the Linux node and set the node selector accordingly:
nodeSelector:
  kubernetes.io/os: linux
You can directly edit the current Argo CD deployment from the CLI and update it:
kubectl edit deployment argocd-server -n <namespace name>
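If you prefer not to open an editor, the same change can be applied with a one-line patch. This is a sketch assuming Argo CD was installed into the argocd namespace:
kubectl patch deployment argocd-server -n argocd --type merge -p '{"spec":{"template":{"spec":{"nodeSelector":{"kubernetes.io/os":"linux"}}}}}'
The same patch can be repeated for the other Argo CD workloads (using kubectl patch statefulset for the application controller).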
I added the nodeSelector to the pod templates of the argocd-server, argocd-repo-server and argocd-application-controller workloads (the former two are Deployments, the latter is a StatefulSet).
After that I redeployed the manifests. The images were pulled and I was able to access the Argo CD dashboard.
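For reference, a sketch of roughly where the nodeSelector lands in each pod template (shown here for the argocd-server Deployment, trimmed to the relevant fields; the repo-server and application-controller are patched the same way):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: argocd-server
  namespace: argocd
spec:
  template:
    spec:
      nodeSelector:
        kubernetes.io/os: linux
      containers:
        - name: argocd-server
          # ...rest of the upstream manifest unchanged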
Related
Is there anyone who runs Argo CD on EKS Fargate? There seems to be an issue with the Argo CD setup on Fargate: all pods are stuck in the Pending state.
I've tried installing into the argocd namespace and into existing ones. It still doesn't work.
I tried to install it using the commands below:
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/v2.4.7/manifests/install.yaml
Make sure you have created a Fargate profile with argocd as the namespace selector; that might be the issue.
Refer to https://docs.aws.amazon.com/eks/latest/userguide/fargate-getting-started.html#fargate-gs-create-profile
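If you use eksctl, creating such a profile is a one-liner. A sketch, where <cluster-name> is your EKS cluster and the profile name is arbitrary:
eksctl create fargateprofile --cluster <cluster-name> --name argocd --namespace argocd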
I have installed krew and installed the rabbitmq plugin with it. Running the kubectl rabbitmq -n create instance --image=custom-image:v1 command created a RabbitMQ StatefulSet in my Google Kubernetes Engine cluster.
The deployment was successful, but now when I try to update the StatefulSet with the new image custom-image:v2, the change is not getting rolled out.
Can someone help me here ?
Thanks & Regards,
Robin
Normally, if you check the StatefulSet's events, you will get a hint about what is going wrong. Usually, if the previous version was still running, the v2 image is not reachable or can't be deployed.
kubectl describe statefulset <statefulSet-name> -n <namespace>
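To see whether the rollout is progressing at all and why a pod might be stuck, these standard commands also help (a sketch; substitute your StatefulSet, pod and namespace names):
kubectl rollout status statefulset <statefulSet-name> -n <namespace>
kubectl get events -n <namespace> --sort-by=.metadata.creationTimestamp
kubectl describe pod <pod-name> -n <namespace>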
I have a single-node Kubernetes setup on Ubuntu 20.04, using MicroK8s and Longhorn storage. I install packages using Helm via the Lens IDE. I have configured everything as per the respective guides, but any time I install a package that requires persistence, e.g. MariaDB or WordPress, the following happens:
the PV and PVC get created and Bound successfully
the pod does not start successfully and throws the error below:
MountVolume.SetUp failed for volume "pvc-fdada93c-c4af-4916-942f-abf9897feaf9" : applyFSGroup failed for vol pvc-fdada93c-c4af-4916-942f-abf9897feaf9: lstat /var/snap/microk8s/common/var/lib/kubelet/pods/f69173e1-cd98-4f86-9e52-edf62fa723da/volumes/kubernetes.io~csi/pvc-fdada93c-c4af-4916-942f-abf9897feaf9/mount: no such file or directory
When I manually create the directory using the command below, the pod starts successfully:
mkdir -p /var/snap/microk8s/common/var/lib/kubelet/pods/f69173e1-cd98-4f86-9e52-edf62fa723da/volumes/kubernetes.io~csi/pvc-fdada93c-c4af-4916-942f-abf9897feaf9/mount
The issue then repeats after a server reboot.
Question: How can I get the volumes to mount automatically when I install a package from Helm? I have seen this work on similar single-node clusters using the same software.
NOTE: nfs-common and open-iscsi are both running
I was able to figure out the issue. It was actually not due to Longhorn itself, but to CoreDNS.
Due to firewall restrictions, CoreDNS could not resolve internal Kubernetes DNS names, in particular longhorn-backend.
Since the Longhorn UI and driver could not reach longhorn-backend, they could never start. Fixing the CoreDNS issues caused the Longhorn services to work well, and my PVCs and PVs also worked as expected.
The steps to resolve it were as follows:
Check the CoreDNS pod for errors:
kubectl logs coredns-7f9c69c78c-7dsjg -n kube-system
Any output other than simply the CoreDNS version means you need to resolve the errors shown (a quick way to test in-cluster DNS resolution is sketched after these steps).
For me, this was done by disabling the firewall and adding 8.8.8.8 to my node's /etc/resolv.conf file.
Once resolved, you can either wait a minute for CoreDNS to resolve internal DNS again or restart it with the command below:
kubectl rollout restart deployment/coredns -n kube-system
Everything worked well after that!
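As a quick check that in-cluster DNS is actually resolving, you can run a throwaway pod and look up a service name. A sketch, assuming Longhorn is installed in the longhorn-system namespace:
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 -- nslookup kubernetes.default.svc.cluster.local
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 -- nslookup longhorn-backend.longhorn-system.svc.cluster.local
If these lookups fail while the upstream DNS (8.8.8.8) works from the node itself, the problem is inside the cluster, as it was here.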
I have integrated GitLab with a Kubernetes cluster hosted on AWS. Currently it deploys the code from GitLab to the default namespace. I have created two namespaces in Kubernetes, one for production and one for development. What are the steps to deploy to the dev or production namespace? Do I need to make changes at the GitLab level or at the Kubernetes level?
This is done at the Kubernetes level. Whether you're using Helm or kubectl, you can specify the desired namespace in the command, as in:
kubectl create -f deployment.yaml --namespace <desired-namespace>
helm install stable/gitlab-ce --namespace <desired-namespace>
Alternatively, you can just change your current namespace to the desired one and install as you did before. By default, Helm charts and Kubernetes YAML files install into your current namespace unless specified otherwise.
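Switching the current namespace is a single command (a sketch; replace <desired-namespace> with your dev or production namespace):
kubectl config set-context --current --namespace=<desired-namespace>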
I want to spin up a single installer pod with helm install that, once running, will apply some logic and install other applications into my cluster using helm install.
I'm aware of Helm dependencies, but I want to run some business logic alongside the installations, and I'd rather do that in the installer pod than on the host triggering the whole installation process.
I found suggestions to use the Kubernetes REST API from inside a pod, but Helm requires kubectl to be installed and configured.
Any ideas?
It seems this was a lot easier than I thought...
In a simple pod running Debian, I just installed kubectl, and with the default service account's secret that's already mounted, kubectl was already configured to talk to the cluster's API.
Note that the configured default namespace is the one that my installer pod is deployed to.
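You can see where this comes from inside the pod: the service account mount contains the namespace next to the token and CA certificate.
cat /var/run/secrets/kubernetes.io/serviceaccount/namespace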
Verified with
$ kubectl cluster-info
$ kubectl get ns
I then installed Helm, which already used kubectl to access the cluster when installing Tiller.
Verified with
$ helm version
$ helm init
I installed a test chart
$ helm install --name my-release stable/wordpress
It works!!
I hope this helps
You could add kubectl to your installer pod.
In-cluster credentials can be provided via a service account, through the default-token secret: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/