Resolving variables in remote ArgoCD applications - kubernetes-helm

I am using some ArgoCD applications and Helm charts that reside in a GitHub repo. Every time I need to deploy them I have to clone the repo, populate the values, push, and trigger ArgoCD by applying the root application.
The root application then has a reference to other helm and argo applications.
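To make the setup concrete, the root application presumably references child applications along the lines of the hedged sketch below; the repo URL, chart path, and parameter names are made up for illustration. The values currently edited by hand would sit under spec.source.helm.

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-child-app                 # hypothetical child application
  namespace: argocd
spec:
  project: default
  destination:
    server: https://kubernetes.default.svc
    namespace: my-namespace
  source:
    repoURL: https://github.com/example/charts.git   # hypothetical chart repo
    path: charts/my-service
    targetRevision: main
    helm:
      parameters:                    # equivalent to helm --set
        - name: image.tag
          value: "1.2.3"
      values: |                      # inline values, equivalent to -f
        replicaCount: 2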
My question: is there a way to populate parameters or environment variables in ArgoCD so that it takes care of substituting them inside Helm charts and applications?
What is a better way than cloning, populating variables, pushing, and triggering the ArgoCD app?

Related

Fix RepeatedResourceWarning for Helm Chart deployed by ArgoCD

I have ArgoCD deployed into a K8S cluster, which deploys multiple applications based on Helm charts into my cluster. Each application is stored in a dedicated Git repo and uses a Chart.yaml file to specify a dependency (e.g. the ArgoCD Helm chart). In addition, I want to customise this Helm chart by overriding certain K8S resources (e.g. config maps), defining the corresponding files in a templates directory inside the same repo where the Chart.yaml is stored. I know that I could use the values.yaml file to make these changes, but I would like to keep the files in the templates folder. After syncing the application to my cluster I receive a "RepeatedResourceWarning" message.
How can I tell ArgoCD to always prioritise the configuration in the templates folder?
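For illustration, the repo layout described above would look roughly like this; the chart name, dependency version, and file name are assumptions, not taken from the question.

# Chart.yaml - wraps the upstream chart as a dependency (version illustrative)
apiVersion: v2
name: my-argocd-wrapper
version: 0.1.0
dependencies:
  - name: argo-cd
    version: "3.x.x"
    repository: https://argoproj.github.io/argo-helm

# templates/argocd-cm.yaml - a ConfigMap defined here with the same name as one
# rendered by the dependency is what triggers the RepeatedResourceWarning, since
# the same resource appears twice in the rendered output.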

Does helmfile sync redeploy all existing helm charts?

I have a few services running on a Kubernetes cluster, and I use a Helm chart in which I placed all my services. However, I was asked to migrate the Helm charts to Helmfile.
If I use
helmfile import myrepo/mychart
helmfile sync
Will it redeploy and replace the existing running pods, or will it just deploy the additional services mentioned?
Helm uses a packaging format called charts. A chart is a collection of files that describe a related set of Kubernetes resources. A single chart might be used to deploy something simple, like a memcached pod, or something complex, like a full web app stack with HTTP servers, databases, caches, and so on.
Helmfile is a declarative spec for deploying helm charts. It lets you...
Keep a directory of chart value files and maintain changes in version control.
Apply CI/CD to configuration changes.
Periodically sync to avoid skew in environments.
To avoid upgrades for each iteration of helm, the helmfile executable delegates to helm - as a result, helm must be installed.
As @DavidMaze suggested, run the helmfile diff command first to determine the changes, then use helmfile sync to apply them.
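For illustration, a minimal helmfile.yaml covering an existing release might look like the following; the repository URL, chart, and release names are assumptions. helmfile sync runs helm upgrade --install for every release listed, but an upgrade whose rendered manifests are unchanged should not replace already-running pods.

repositories:
  - name: myrepo
    url: https://charts.example.com    # hypothetical chart repository

releases:
  - name: myservice                    # keep the existing release name to upgrade in place
    namespace: default
    chart: myrepo/mychart
    values:
      - values.yaml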

How to get/download the configured/updated values.yaml file of a deployed application from the ArgoCD dashboard?

I have deployed an application in ArgoCD using open-source Helm charts (I created the application using the Argo UI dashboard).
I have modified/added some parameters in ArgoCD UI.
I need the values.yaml file with the updated data so that I can use it to deploy locally or anywhere using the helm command.
One way is to look through the ArgoCD UI and copy each parameter manually, which would take a lot of time since I have a lot of parameters set.
Is there an easy way to download the values.yaml from the Argo UI or using the Argo CLI?
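A hedged starting point, assuming the changes were made as Application overrides in ArgoCD rather than committed back to Git: the argocd CLI can dump the Application manifest, and the overridden Helm values and parameters appear under spec.source.helm (values, valueFiles, parameters). The application name below is a placeholder.

# Print the Application as YAML and read the Helm overrides from spec.source.helm
argocd app get my-app -o yaml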

How can I use GitLab's Container Registry for Helm Charts with ArgoCD's CI/CD Mechanism?

My situation is as follows:
have a kubernetes cluster with a couple of nodes
have argocd installed on the cluster and working great
using gitlab for my repo and build pipelines
have another repo for storing my helm charts
have docker images being built in gitlab and pushed to my gitlab registry
have argocd able to point to my helm chart repo and sync the helm chart with my k8s cluster
have helm chart archive files pushed to my gitlab repo
While this is a decent setup, it's not ideal.
The first problem I faced with using a Helm chart Git repo is that I can't (or don't know how to) differentiate my staging environment from my production environment. Since I have a dev environment and a prod environment in my cluster, ArgoCD syncs both environments with the Helm chart repo. I could get around this with separate charts for each environment, but that isn't a viable solution.
The second problem I faced, while trying to get around the above, is that I can't get ArgoCD to pull Helm charts from a GitLab OCI registry. I made my build pipeline push the Helm chart archive file to my GitLab container registry with the tag dev-latest or prod-latest, which is great, just what I want. The problem is that ArgoCD, as far as I can tell, can't pull from GitLab's container registry.
How do I go about getting my pipeline automated with gitlab as my repo and build pipeline, helm for packaging my application, and argocd for syncing my helm application with my k8s cluster?
Regarding "I can't get argocd to pull helm charts from a gitlab oci registry":
You might be interested in GitLab 14.1 (July 2021):
Build, publish, and share Helm charts
Helm defines a chart as a Helm package that contains all of the resource definitions necessary to run an application, tool, or service inside of a Kubernetes cluster.
For organizations that create and manage their own Helm charts, it’s important to have a central repository to collect and share them.
GitLab already supports a variety of other package manager formats.
Why not also support Helm? That’s what community member and MVP from the 14.0 milestone Mathieu Parent asked several months ago before breaking ground on the new GitLab Helm chart registry. The collaboration between the community and GitLab is part of our dual flywheel strategy and one of the reasons I love working at GitLab. Chapeau Mathieu!
Now you can use your GitLab project to publish and share packaged Helm charts.
Simply add your project as a remote, authenticating with a personal access, deploy, or CI/CD job token.
Once that’s done you can use the Helm client or GitLab CI/CD to manage your Helm charts.
You can also download the charts using the API or the user interface.
What’s next? First, we’d like to present additional metadata for charts.
Then we’ll start dogfooding the feature by using it as a replacement for https://charts.gitlab.io/.
So, try out the feature and let us know how it goes by commenting in the epic GitLab-#6366.
See Documentation and issue.
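Concretely, the host, project ID, repo alias, and chart names below are placeholders; the commands follow the pattern documented for the GitLab Helm chart repository, and publishing uses the chartmuseum push plugin.

# Add the project's chart repository (authenticate with a personal access, deploy, or CI/CD job token)
helm repo add --username <username> --password <access_token> my-charts \
  https://gitlab.example.com/api/v4/projects/<project_id>/packages/helm/stable

# Install the push plugin, then publish a packaged chart
helm plugin install https://github.com/chartmuseum/helm-push
helm cm-push mychart-0.1.0.tgz my-charts

Once the chart is published, ArgoCD can consume the same URL as an ordinary Helm repository rather than a Git path; a partial Application source is sketched below (chart name and version are again placeholders). Separate dev and prod Applications can then pin different chart versions or pass different Helm values, which also helps with the staging/production split.

spec:
  source:
    repoURL: https://gitlab.example.com/api/v4/projects/<project_id>/packages/helm/stable
    chart: mychart            # chart name instead of a path in a Git repo
    targetRevision: 1.0.0     # chart version to deploy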

How to use a Helm template in multiple repositories?

I have several microservices with practically the same YAML settings; only some values change (e.g. image, version, a specific environment variable ...), and they live in different repositories, each with its own pipeline. How do I use the same template between them without repeating code?
This is how we do it in the place I currently work.
We have our own generic Helm chart that is version controlled and hosted in our Artifactory, every parameter in that chart that may need changing is exposed in values.yaml.
The Artifactory instance gets added to Helm as a repository; then you only need a separate values.yaml for each microservice you want deployed, as the chart gets sourced centrally.
helm install -f values.yaml microservice01 artifactory/global-helm-chart
On top of that we use helmfile, but this is not necessary in order to achieve your goal.
The key points are:
make the chart generic
host it centrally
add the repository to helm.
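For example, each microservice repo then carries only a small values.yaml against the shared chart; the field names below are illustrative and depend on what the generic chart exposes.

# values.yaml in the microservice01 repo
image:
  repository: registry.example.com/team/microservice01
  tag: "1.4.2"
replicaCount: 2
env:
  LOG_LEVEL: info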
You can also update the values.yaml from the pipeline, then package the chart and deploy it. That way you can keep the same YAML file while the values differ depending on which pipeline deploys them.
Alternatively, an easy approach is to maintain a different values.yaml per environment in the Helm chart itself and pass the appropriate one during helm install/upgrade from the pipeline.
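A minimal sketch of that per-environment approach, with the pipeline selecting the values file (release, repo, and file names are made up):

# Staging pipeline
helm upgrade --install microservice01 artifactory/global-helm-chart -f values-staging.yaml
# Production pipeline
helm upgrade --install microservice01 artifactory/global-helm-chart -f values-prod.yaml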
We do this for about 90 microservices. We have a common chart, and we run the values file through a sed-like script that changes what we need; then the whole package gets deployed.
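A minimal sketch of that kind of substitution step, assuming placeholder tokens in a values template; the token, variable, and file names are made up:

# Replace placeholders in the values template before packaging and deploying
sed -e "s|__IMAGE_TAG__|${IMAGE_TAG}|g" \
    -e "s|__SERVICE_NAME__|${SERVICE_NAME}|g" \
    values.template.yaml > values.yaml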