I have two projects:
Project A - Contains the Source code for my Microservice application
Project B - Contains the Kubernetes resources for Project A using Helm
Both projects reside in their own separate git repositories.
Project A is built by a full-blown CI pipeline that builds a Docker image with a version tag, pushes it to Docker Hub, and then writes the version number of that Docker image into Project B via a git push from the CI server. It does so by committing a simple txt file containing the Docker version it just built.
So far so good! Project B now contains the Docker version for the microservice in Project A, and I want to pass / inject this value into the values.yaml so that when I package Project B via Helm, I have the latest version.
Any ideas how I could get this implemented?
What I usually do here is write the value to the correct field in the YAML directly. To work with YAML on the command line, I recommend the CLI tool yq.
I usually use plain Kubernetes Deployment manifest YAML and typically update the image field with this yq command (yq v3 syntax):
yq write --inplace deployment.yaml 'spec.template.spec.containers.(name==myapp).image' <my-registry>/<my-image-repo>/<image-name>:<tag-name>
and after that commit the yaml file to the repo with yaml manifests.
Now, you are using Helm, but it is still YAML, so you should be able to solve this in a similar way. Maybe something like:
yq write --inplace values.yaml 'app.image' <my-image>
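Note that the commands above use the yq v3 syntax. On yq v4 or later the equivalent would be something like this, assuming the image reference lives under app.image in your values.yaml:
yq -i '.app.image = "<my-registry>/<my-image-repo>/<image-name>:<tag-name>"' values.yaml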
I have a git repo with Helm charts, and a git repo per environment with kustomizations for those charts. All of this is working great with Flux. But I don't know how to "preview" my changes to the kustomization files.
Let's say I edit the dev environment kustomization; can I preview what the final YAML will look like?
You can have a look at 2 different commands:
flux diff - "The diff command does a build, then it performs a server-side dry-run and prints the diff."
kustomize build - "It recursively builds (aka hydrates) the kustomization.yaml you point it to, resulting in a set of Kubernetes resources ready to be deployed."
As a Flux Kustomization only points to a standard kustomize file, you can use kustomize build to see the manifests.
PS: For helm, check out helm template
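For example, assuming the dev environment kustomization lives in ./environments/dev, the Flux Kustomization is named my-app, and the chart sits in ./charts/my-chart (all of these names are placeholders):
kustomize build ./environments/dev
flux diff kustomization my-app --path ./environments/dev
helm template my-release ./charts/my-chart --values values-dev.yaml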
I have a question to ask, but I'll explain my plan/requirements first:
I have just started at a new company
I have been tasked with migrating a lot of microservices running on Swarm to Kubernetes
there are about 50 microservices running now
right now we are using Consul as a key/value store for the configuration files
due to a lot of mistakes in the infrastructure design, our Swarm is not stable (failing overlay networks and so on)
developers want versioning on the configuration as well, but in a specific way:
one project for all config files
they don't want to go through build stages
there are some applications that read live configuration (changes occur regularly)
so I need to centralize the configuration and create a project for this task
I store Kubernetes manifests, GitLab CI files, and app configurations there
when I include CI files in the target project I can't access the config and Kube manifests (submodules are not acceptable to the developers)
I'm planning to use Helm instead of kubectl for deployment
my biggest challenge is providing the configuration live (as soon as the developer pushes, it is applied to the ConfigMap)
am I on the right track?
any suggestion on how to achieve my goal?
I expect to be able to deploy projects and use multiple files and folders from other projects
Create a CI file like this in your devops repo; this job should commit the config file to your devops repo when the config changes (a sketch of the commit command itself follows the job definition):
commit-config-file-to-devops-repo:
  script: "command to commit config file to your devops repo"
  only:
    refs:
      - master
    changes:
      - path/some-config-file.json
      - configs/*
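A sketch of what that script could look like. The devops repo URL, the DEVOPS_REPO_TOKEN variable, and the paths are placeholders you would need to adapt; CI_PROJECT_NAME and CI_COMMIT_SHORT_SHA are predefined GitLab CI variables:
  script:
    - git clone "https://oauth2:${DEVOPS_REPO_TOKEN}@gitlab.example.com/mygroup/devops-repo.git"
    - cp path/some-config-file.json devops-repo/configs/
    - cd devops-repo
    - git config user.email "ci-bot@example.com"
    - git config user.name "ci-bot"
    - git add configs/
    - git commit -m "Update config from ${CI_PROJECT_NAME} ${CI_COMMIT_SHORT_SHA}" || echo "nothing to commit"
    - git push origin master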
Change the default CI file location to point to the CI file in your devops repo:
https://192.168.64.188/help/ci/pipelines/settings#custom-cicd-configuration-path
my/path/.my-custom-file.yml@mygroup/another-project
Set up a pipeline that applies the config to k8s when the file is committed.
Personally I use Argo CD to sync the Helm chart to k8s, but you can do it your own way.
Reading live configuration is normally not recommended, because changing the config on the fly may cause errors. When using k8s, it is better to create a ConfigMap, inject the config into environment variables, and then use the rollout mechanism to restart the app. However, if you use a ConfigMap volume instead, the mounted config file is updated automatically when you change the ConfigMap:
https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#mounted-configmaps-are-updated-automatically
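For illustration, a minimal sketch of the ConfigMap volume approach; all names (myapp, myapp-config, the mount path) are hypothetical:
# pod template fragment: the mounted file is refreshed automatically when the ConfigMap changes
containers:
  - name: myapp
    image: registry.example.com/myapp:1.0.0
    volumeMounts:
      - name: config
        mountPath: /etc/myapp
volumes:
  - name: config
    configMap:
      name: myapp-config
With the environment-variable approach, by contrast, the app only picks up changes after a restart, e.g. kubectl rollout restart deployment/myapp.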
I am setting up Anthos config sync for my GitHub repositories for application deployment.
I am looking for a git strategy for the following problem:
I build the Docker image based on the pull request merge in the git repository.
The image version then has to be updated in the policy folder for Anthos Config Sync, where I have either Helm or Kustomize.
Currently, the Helm and Kustomize files live in the same repository as the source code, so whenever I push code a Docker build happens. So, how do I update the Docker image version in the Helm chart or kustomization?
One reservation: I don't want to use the latest tag.
So does that mean:
Do I have to separate the git repository for the Helm / Kustomize deployment from the source code?
And do I have to use the GitHub API to commit the file in that Helm / Kustomize git repository?
Even if I use a custom pre-commit hook, there is no guarantee that it fires to propagate the git commit hash into the Docker image tag and the Kubernetes manifests.
Do you have any suggestions for a git strategy to handle this situation?
I think I'm about to reinvent the wheel here. I have all the parts but keep thinking: somebody must have done this (properly) before me.
We have a Jenkins CI job that builds image-name:${BRANCH_NAME} and pushes it to a registry. We want to create a CD job that deploys this image-name:${BRANCH_NAME} to a Kubernetes cluster. And so now we run into the problem that if we call helm upgrade --install with the same image-name:${BRANCH_NAME}, nothing happens, even if image-name:${BRANCH_NAME} now actually refers to a different sha256 sum. We (think we) understand this.
How is this generally solved? Are there best practices about this? I see two general approaches:
The CI job doesn't just create image-name:${BRANCH_NAME}, it also creates a unique tag, e.g. image-name:${BRANCH_NAME}-${BUILD_NUMBER}. The CD job never deploys the generic image-name:${BRANCH_NAME}, but always the unique image-name:${BRANCH_NAME}-${BUILD_NUMBER}.
After the CI job has created image-name:${BRANCH_NAME}, its SHA256 sum is retrieved somehow (e.g. with docker inspect or skopeo) and helm is called with that SHA256 sum (a concrete sketch follows below).
In both cases, we have two choices. Modify, commit and track a custom-image-tags.yaml file, or run helm with --set parameters for the image tags. If we go with option 1, we'll have to periodically remove "old tags" to save disk space.
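For concreteness, approach 2 might look something like this; the registry, chart path, and the image.digest values key are placeholders, and the chart template would have to reference the image as repository@digest rather than repository:tag:
# grab the digest of the tag that was just pushed (RepoDigests is populated after a push/pull)
DIGEST=$(docker inspect --format '{{index .RepoDigests 0}}' image-name:${BRANCH_NAME} | cut -d@ -f2)
# or, without a local Docker daemon:
# DIGEST=$(skopeo inspect --format '{{.Digest}}' docker://my-registry/image-name:${BRANCH_NAME})
helm upgrade --install my-release ./my-chart --set image.digest="${DIGEST}"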
And if we have a single CD job with a single helm chart that contains multiple images, this only gets more complicated.
Surely, there must be some opinionated tooling to do all this for us.
What are the ways to do this without re-inventing this particular wheel for the 4598734th time?
kbld gets me some of the way, but breaks helm
I've found kbld, which allows me to:
helm template my-chart --values my-vals.yml | kbld -f - | kubectl apply -f -
which basically implements approach 2 above, but now helm is unaware that the chart has been installed, so I can't helm uninstall it. :-( I'm hoping there is some better approach...
kbld can also be used "fully" with helm...
Yes, the docs suggest:
$ helm template my-chart --values my-vals.yml | kbld -f - | kubectl apply -f -
But this also works:
$ cat kbld-stdin.sh
#!/bin/bash
kbld -f -
$ helm upgrade --install my-release my-chart --values my-vals.yml --post-renderer ./kbld-stdin.sh
With --post-renderer, helm list, helm uninstall, etc. all still work.
One approach is that every build of the Jenkins CI job creates a Docker image with a new, semantically versioned image tag (see the sketch after the example below).
To generate the image tag, you need to tag every git commit with a semantic version that is an increment of the previous commit's tag.
For Example :
Your first commit in a git repository master branch will be tagged as 0.0.1 and your docker image tag will be 0.0.1
Then when the CI build runs for the next git commit in master branch, that git commit in the git repository will be tagged as 0.0.2 and your docker image tag will be 0.0.2
Since you have a single Helm chart for multiple images, your CI build can then download the latest version of your Helm chart, change the Docker image tag, and upload the Helm chart with the same chart version.
If you create a new git release branch, then it should be tagged 0.1.0, and the Docker image created for this new release branch should be tagged 0.1.0.
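For concreteness, a sketch of how the CI job could derive and use the next tag; the registry and image names are placeholders, and it assumes the repo already has at least one semver tag:
LATEST=$(git describe --tags --abbrev=0)                                  # e.g. 0.0.1
NEXT=$(echo "${LATEST}" | awk -F. '{printf "%d.%d.%d", $1, $2, $3 + 1}')  # e.g. 0.0.2
git tag "${NEXT}"
git push origin "${NEXT}"
docker build -t registry.example.com/myapp:"${NEXT}" .
docker push registry.example.com/myapp:"${NEXT}"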
You can use this tag in the Maven pom.xml for Java Applications as well.
Using the Docker image tag, developers can check out the corresponding git tag to find the source code behind that image. It will help them with debugging and also with providing fixes.
Please also read https://medium.com/@mccode/using-semantic-versioning-for-docker-image-tags-dfde8be06699
So, what I'm trying to do is use helm to install an application to my kubernetes cluster. Let's say the image tag is 1.0.0 in the chart.
Then, as part of a CI/CD build pipeline, I'd like to update the image tag using kubectl, i.e. kubectl set image deployment/myapp...
The problem is that if I subsequently make any change to the Helm chart (e.g. the number of replicas) and run helm upgrade myapp, this will revert the image tag back to 1.0.0.
I've tried passing in the --reuse-values flag to the helm upgrade command but that hasn't helped.
Anyone have any ideas? Do I need to use helm to update the image tag? I'm trying to avoid this, as the chart is not available at this stage in the pipeline.
When using CI/CD to build and deploy, you should have a single source of truth: a file versioned in e.g. Git, and you make all changes in that file. So if you use Helm charts, they should be stored in e.g. Git and all changes (e.g. a new image) should be made in your Git repository.
You could have a build pipeline that, at the end, commits the new image tag to a Kubernetes config repository. Then a deployment pipeline is triggered that uses Helm or Kustomize to apply your changes and possibly execute tests.
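A sketch of what the tail of the build pipeline could look like, assuming the config repository keeps the tag under image.tag in the chart's values.yaml, that NEW_TAG holds the freshly built tag, and that yq v4 is available; all repo and path names are placeholders:
# in the build pipeline, after pushing the image
git clone https://git.example.com/myorg/k8s-config.git && cd k8s-config
yq -i ".image.tag = \"${NEW_TAG}\"" charts/myapp/values.yaml
git commit -am "myapp: bump image tag to ${NEW_TAG}"
git push
# the deployment pipeline, triggered by that commit, then runs e.g.:
helm upgrade --install myapp ./charts/myapp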