Argo CD with EKS Fargate - Kubernetes

Is there anyone who uses Argo CD on EKS Fargate? It seems there is an issue with the Argo CD setup on Fargate: all pods are stuck in the Pending state.
I've tried installing into the argocd namespace as well as existing ones. It still doesn't work.
I tried to install it using the commands below:
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/v2.4.7/manifests/install.yaml

Make sure you have created a Fargate profile with the namespace selector set to argocd. That might be one of the issues.
Refer to this: https://docs.aws.amazon.com/eks/latest/userguide/fargate-getting-started.html#fargate-gs-create-profile
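As a rough sketch (the cluster name below is a placeholder), such a profile can be created with eksctl, after which the pending pods can be deleted so they get rescheduled onto Fargate:
eksctl create fargateprofile \
  --cluster my-eks-cluster \
  --name argocd \
  --namespace argocd
# delete the pending pods so the scheduler recreates them on Fargate
kubectl -n argocd delete pods --all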

Related

ArgoCD deployment to EKS and AKS

Is there any way for ArgoCD to deploy to AKS and EKS clusters simultaneously? I don't see any setting in ArgoCD to connect to another cluster. My aim is for ArgoCD to deploy to both AKS and EKS. As of now, since ArgoCD is deployed to EKS, it picks that cluster up by default, but I want to connect ArgoCD to AKS as well. If there is a way, please tell me.
Yes, you can deploy to multiple clusters or external clusters using Argo CD.
Please check this out: https://blog.doit-intl.com/automating-kubernetes-multi-cluster-config-with-argo-cd-5ac5e371ef01
If your Argo CD is running locally on the same host, you can check the existing clusters using:
kubectl config get-contexts
Then, using the cluster context name, you can add that cluster to Argo CD via the argocd CLI:
argocd cluster add RESPECTIVE-CONTEXT-NAME
https://argoproj.github.io/argo-cd/user-guide/commands/argocd_cluster_add/
Read more at: https://itnext.io/argocd-setup-external-clusters-by-name-d3d58a53acb0
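A rough outline of the whole flow, assuming the AKS context in your kubeconfig is named aks-cluster (a placeholder):
# log in to the Argo CD API server first
argocd login <argocd-server-address>
# list the contexts available in your kubeconfig
kubectl config get-contexts
# register the AKS cluster with Argo CD using its context name
argocd cluster add aks-cluster
# verify both clusters are now known to Argo CD
argocd cluster list
Once the cluster is added, an Application can target it by selecting that cluster as its destination.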

Argo CD pod can't pull the image

I'm running a hybrid AKS cluster where I have one Linux and one Windows node. The Windows node will run a legacy server app. I want to use Argo CD to simplify the deployments.
After following the installation instructions (installation manifest) and installing Argo CD in the cluster, I noticed I couldn't connect to its dashboard.
Troubleshooting the issue, I found that the Argo CD pod can't pull its image. Below is the output of kubectl describe pod argocd-server-75b6967787-xfccz -n argocd
Another thing that is visible there is that the Argo CD pod got assigned to the Windows node. From what I found here, Argo CD can't run on Windows nodes. I think that's the root cause of the problem.
Does anyone know how I can force the Argo CD pods to run on the Linux node?
I found that something like nodeSelector could be useful:
nodeSelector:
  kubernetes.io/os: linux
But how can I apply the nodeSelector to the already deployed Argo CD?
Does anyone know how I can force the Argo CD pods to run on the Linux node?
Check the labels on the Linux node and set the node selector accordingly:
nodeSelector:
  kubernetes.io/os: linux
You can directly edit the current Argo CD Deployment:
kubectl edit deployment argocd-server -n <namespace name>
and update the nodeSelector there via the CLI.
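If you prefer not to edit interactively, a one-line patch can set the same nodeSelector on the pod template; this is a sketch assuming the standard argocd namespace:
# strategic merge patch that adds the nodeSelector to the pod template
kubectl -n argocd patch deployment argocd-server \
  -p '{"spec":{"template":{"spec":{"nodeSelector":{"kubernetes.io/os":"linux"}}}}}'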
I added the nodeSelector to the pod templates of the argocd-server, argocd-repo-server and argocd-application-controller workloads (the former two are Deployments, the latter is a StatefulSet).
After that I redeployed the manifests. The images were pulled and I was able to access the Argo CD dashboard.
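For reference, this is roughly where the selector goes in each workload's manifest; the same spec.template.spec path applies to the Deployments and the StatefulSet:
spec:
  template:
    spec:
      nodeSelector:
        kubernetes.io/os: linux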

Install Istio in a multi-master Kubernetes cluster

I have read about Istio and I need to install it in Kubernetes.
I don't know what the best way is to install Istio in a multi-node Kubernetes cluster.
The setup is a cluster with multiple master nodes and multiple worker (slave) nodes.
Is the best way to install Istio multicluster, or automatic sidecar injection?
Regards.
It makes no difference how many master and worker nodes your Kubernetes cluster has if you want to install Istio.
You can follow the instructions from this link
Briefly, you need to:
Download an Istio release
Install Istio's Custom Resource Definitions using kubectl apply -f install/kubernetes/helm/istio/templates/crds.yaml from that release
Install the Istio components using one of these options:
without mutual TLS authentication between sidecars, using kubectl apply -f install/kubernetes/istio-demo.yaml
with default mutual TLS authentication, using kubectl apply -f install/kubernetes/istio-demo-auth.yaml
render the Kubernetes manifests with Helm and deploy them with kubectl
use Helm and Tiller to manage the Istio deployment
For automatic injection, you need to install the istio-sidecar-injector component and add the istio-injection=enabled label to each namespace in which you want injection to work.
Example of commands:
kubectl label namespace <namespace> istio-injection=enabled
kubectl create -n <namespace> -f <your-app-spec>.yaml
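To verify that injection is working, you can check the namespace label and confirm that newly created pods get the extra istio-proxy container; a minimal check, assuming the pods were created after the label was applied:
# the istio-injection column should show "enabled" for the labelled namespace
kubectl get namespace -L istio-injection
# injected pods report 2/2 ready: the application container plus istio-proxy
kubectl get pods -n <namespace>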

Deploying images from GitLab to a new namespace in Kubernetes

I have integrated GitLab with a Kubernetes cluster hosted on AWS. Currently it deploys the code from GitLab to the default namespace. I have created two namespaces in Kubernetes, one for production and one for development. What are the steps if I want the code to be deployed to the dev or the production namespace? Do I need to make changes at the GitLab level or at the Kubernetes level?
This is done at the kubernetes level. Whether you're using helm or kubectl, you can specify the desired namespace in the command.
As in:
kubectl create -f deployment.yaml --namespace <desired-namespace>
helm install stable/gitlab-ce --namespace <desired-namespace>
Alternatively, you can just switch your current namespace to the desired namespace and install as you did before. By default, Helm charts or Kubernetes YAML files are installed into your current namespace unless specified otherwise.
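If the deployment is driven from GitLab CI, you can also pass the namespace per environment in the deploy jobs; the sketch below is only illustrative, and the job names, image and manifest path are assumptions:
stages:
  - deploy

deploy_dev:
  stage: deploy
  image: bitnami/kubectl:latest
  script:
    - kubectl apply -f deployment.yaml --namespace development
  environment: development

deploy_prod:
  stage: deploy
  image: bitnami/kubectl:latest
  script:
    - kubectl apply -f deployment.yaml --namespace production
  environment: production
  when: manual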

Update deployment fails when same name exists in separate namespaces

I've used the following command to update the image run by a deployment:
kubectl --cluster websites --namespace production set image \
  deployment/mobile-web mobile-web=eu.gcr.io/websites/mobile-web:0.23
This worked well until I created a staging namespace mirroring the production environment. In other words, the deployment mobile-web exists in both the production and staging namespaces. Now I get the error:
Error from server: the server could not find the requested resource
(get deployments.extensions mobile-web)
What am I missing here? Or is the only way to update via a YAML or JSON file, which means a bit more work in the CI/CD pipeline? I've tried setting the namespace with:
kubectl config set-context production --namespace=production --cluster=websites
but to no avail.
The solution to my problem was to kill the current proxy, get new credentials, and start the proxy again:
gcloud container clusters get-credentials websites
kubectl proxy --port=8080
Now both commands work as expected:
kubectl get deployment mobile-web --namespace=production
kubectl get deployment mobile-web --namespace=staging
However, it doesn't explain why it stopped working in the first place.
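If it happens again, a quick way to see which context and namespace kubectl is actually targeting (and whether the cached credentials are stale) is, as a rough sketch:
# show the context kubectl is currently using
kubectl config current-context
# show the namespace configured for that context (empty means "default")
kubectl config view --minify --output 'jsonpath={..namespace}'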