How to use GitLab to share files and folders between projects - Kubernetes

I have a question, but I'll explain my plan/requirements first.
I have just started at a new company.
I have been tasked with migrating about 50 microservices currently running on Swarm to Kubernetes.
Right now we use Consul as a key/value store for configuration files.
Due to a lot of mistakes in the infrastructure design, our Swarm cluster is not stable (failing overlay networks and so on).
The developers want version control on the configuration as well, but in a specific way:
one project for all config files;
they don't want to go through build stages;
some applications read their configuration live (changes occur regularly).
So I need to centralize the configuration and create a project for this task, where I store the Kubernetes manifests, GitLab CI files, and app configurations.
When I include CI files in the target project I can't access the config and Kubernetes manifests (submodules are not acceptable to the developers).
I'm planning to use Helm instead of kubectl for deployment.
My biggest challenge is to provide the configuration live: as soon as a developer pushes a change, it should be applied to the ConfigMap.
Am I on the right track?
Any suggestions on how to achieve my goal?
I expect to be able to deploy projects and use multiple files and folders from other projects.

Create a CI file like this in your devops repo; this job should commit the config file to your devops repo whenever the config changes:
commit-config-file-to-devops-repo:
  script: "command to commit config file to your devops repo"
  only:
    refs:
      - master
    changes:
      - path/some-config-file.json
      - configs/*
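A slightly more concrete version of that job might look like the sketch below. It assumes a project access token stored in a CI/CD variable called CONFIG_REPO_TOKEN and a hypothetical devops repo at gitlab.example.com/devops/config-repo; the image, identity, and paths are placeholders for your own setup.

commit-config-file-to-devops-repo:
  image: alpine:latest
  before_script:
    - apk add --no-cache git
  script:
    # clone the central devops repo with the assumed access token
    - git clone "https://oauth2:${CONFIG_REPO_TOKEN}@gitlab.example.com/devops/config-repo.git"
    # copy the changed config into it and push
    - cp path/some-config-file.json config-repo/configs/
    - cd config-repo
    - git config user.email "ci@example.com"   # placeholder identity
    - git config user.name "GitLab CI"
    - git add configs/
    # "|| true" keeps the job green when nothing actually changed
    - git commit -m "Update config from ${CI_PROJECT_PATH} ${CI_COMMIT_SHORT_SHA}" || true
    - git push origin master
  only:
    refs:
      - master
    changes:
      - path/some-config-file.json
      - configs/*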
Change the default CI file location to point to the CI file in your devops repo:
https://docs.gitlab.com/ee/ci/pipelines/settings.html#custom-cicd-configuration-path
my/path/.my-custom-file.yml@mygroup/another-project
Then set up a pipeline that applies the config to Kubernetes when the file is committed.
Personally I use Argo CD to sync a Helm chart to Kubernetes, but you can do it your own way.
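For the Argo CD route, a minimal Application manifest pointing at the devops repo could look something like this (the repo URL, chart path, and names are hypothetical):

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-service                 # hypothetical application name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://gitlab.example.com/devops/config-repo.git   # hypothetical devops repo
    targetRevision: master
    path: charts/my-service        # hypothetical chart path inside the repo
  destination:
    server: https://kubernetes.default.svc
    namespace: my-service
  syncPolicy:
    automated:
      prune: true                  # delete resources removed from git
      selfHeal: true               # revert manual drift in the cluster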
Reading live configuration is normally not recommended, because a config change can cause runtime errors.
When using Kubernetes, it is usually better to create a ConfigMap and inject the config into environment variables,
then use the rollout mechanism (kubectl rollout restart) to restart the app.
However, if you mount the ConfigMap as a volume,
the config file inside the pod is updated automatically when you change the ConfigMap:
https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#mounted-configmaps-are-updated-automatically
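As a minimal sketch of the volume approach (the names and image are hypothetical):

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  settings.json: |
    { "featureFlag": true }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: registry.example.com/my-app:latest   # hypothetical image
          volumeMounts:
            - name: config
              mountPath: /etc/my-app                  # the app reads settings.json from here
      volumes:
        - name: config
          configMap:
            name: app-config

Keep in mind that the kubelet refreshes mounted ConfigMaps periodically, so a change can take up to a minute or so to show up inside the pod.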

Related

Kubernetes - Handle cronjobs like crontab

I have a lot of cronjobs I need to set on Kubernetes.
I want a file to manage them all and set them to Kubernetes on deployment. I wish that if I remove a cron from that file it will be removed from Kubernetes too.
Basically, I want to handle the crons like I handle them today on the machine (from a cron file that I would deploy): add, remove and change crons.
I couldn't find a way of doing so. Does someone have an idea?
A library or framework I can use, like Helm? Or any other solution?
I highly recommend using GitOps with Argo CD as a solution for Kubernetes configuration management. Running crontab inside a Deployment is a bad idea because it is hard to monitor the job results (CronJob results can be collected by the kube-state-metrics exporter).
The idea is to package your manifests (plain Kubernetes manifests, Kustomize, Helm, etc.), put them in git, and let Argo CD make sure your configuration is deployed correctly.
The advantages of GitOps include:
centralized configuration
versioned configuration
git authentication & authorization
traceability
multi-cluster deployment with Argo CD
automated deployment & sync
...
GitOps is not difficult and is the modern way to do Kubernetes configuration management. Give it a try.
I used Helm to do so. I built a template that iterates over all the crons, which I insert as values into the Helm chart (very similar to crontab but more structured) - see the example.
Then all I need to do is run a helm upgrade with a new cron (values) file and it updates everything accordingly. If I update, remove, or add a cron, everything happens automatically and with versioning. You can also add a namespace for your cronjobs to make them more encapsulated.
Here is a very good and easy-to-understand example I used, and its git repo.
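As a rough sketch of that values-driven pattern (the values layout, job image, and names here are hypothetical, not the linked example's exact structure):

# values.yaml
crons:
  - name: cleanup
    schedule: "0 3 * * *"
    command: ["/bin/sh", "-c", "echo run cleanup here"]
  - name: report
    schedule: "*/15 * * * *"
    command: ["/bin/sh", "-c", "echo run report here"]

# templates/cronjobs.yaml
{{- range .Values.crons }}
---
apiVersion: batch/v1
kind: CronJob
metadata:
  name: {{ .name }}
spec:
  schedule: {{ .schedule | quote }}
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: {{ .name }}
              image: busybox:1.36            # hypothetical job image
              command: {{ toJson .command }}
{{- end }}

Removing an entry from values.yaml and running helm upgrade then deletes the corresponding CronJob, which matches the "remove it from the file and it disappears from the cluster" behaviour.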

Skaffold config dependencies with profiles

I have a microservice application in one repo that communicates with another service that's managed by another repo.
This is not an issue when deploying to the cloud; however, when developing locally, the other service needs to be deployed too.
I've read this documentation: https://skaffold.dev/docs/design/config/#remote-config-dependency and this seems like a clean solution, but I only want it to depend on the git skaffold config if deploying locally (i.e. current context is "minikube").
Is there a way to do this?
Profiles can be automatically activated based on criteria such as environment variables, kube-context names, and the Skaffold command being run.
Profiles are processed after the config dependencies are resolved, though. But you could have your remote config include a profile that is contingent on kubeContext: minikube, for example:
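A rough sketch of what that could look like in the dependent project's skaffold.yaml (the repo URL, profile names, and apiVersion are assumptions; check them against your Skaffold version):

apiVersion: skaffold/v2beta29
kind: Config
metadata:
  name: my-service                  # hypothetical
requires:
  - git:
      repo: https://example.com/other-service.git   # hypothetical remote repo
      path: skaffold.yaml
    activeProfiles:
      - name: local                 # profile defined in the remote config
        activatedBy:
          - local                   # ...activated when this config's "local" profile is active
profiles:
  - name: local
    activation:
      - kubeContext: minikube       # auto-activate when the current context is minikube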
Another alternative is to have several skaffold.yamls: one for prod, one for dev.

Is there a way to deploy different sets of pods based on context?

For example, I have three apps that I want to deploy, and a database as well. When running on a dev machine (docker-for-desktop context) or in an integration or test cluster, I would want to run 2 replicas of each app, plus have a SQL container that they all connect to. In staging or production, I want to be able to set the replicas as per traffic needs, and I want to connect to a different (external) SQL server.
I would then want those YAML files kept in source control, so that depending on your environment the correct context is used and all of the YAML files are "created".
Is this possible with contexts? Or is this a namespace problem? Or do I just need to have YAML files in separate folders (a local folder, a staging folder, a production folder) with copies of the YAML files in each? Or maybe some other option?
There are multiple options to maintain the same set of YAML files for different configurations, for example:
Helm
Kustomize
ksonnet
You can use these tools to keep only the per-environment configuration in separate files in git. For example, with Kustomize you keep a shared base plus a small overlay per environment, as sketched below.
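A minimal Kustomize sketch of the dev case from the question (the Deployment names and the extra SQL manifest are hypothetical):

# overlays/dev/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base                # the shared Deployments/Services for the three apps
  - sql-deployment.yaml       # dev/test only: an in-cluster SQL container
replicas:
  - name: app-one             # hypothetical Deployment names from the base
    count: 2
  - name: app-two
    count: 2
  - name: app-three
    count: 2

The staging and production overlays would omit sql-deployment.yaml, set their own replica counts, and patch the apps' connection settings to point at the external SQL server.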

How to handle multiple environments with Google Cloud Build and Kubernetes

I've successfully set up a CI/CD pipeline following this tutorial.
It shows clearly how to make Google Cloud Build and Kubernetes work with one environment: production.
For simplicity, this tutorial uses a single environment (production) in the env repository, but you can extend it to deploy to multiple environments if needed.
Right, but some details are missing: is there one kubernetes.yaml file per environment? What about Kubernetes namespaces?
More precisely, what would be the way to handle multiple environments (staging, ...)?
There could be a bazillion ways of doing environments, but here is what I understand from this line:
env repository: contains the manifests for the Kubernetes Deployment
The default master/production branch maps to the production environment. You can then create, for example, testing and staging branches, where you test and stage your changes, and later port them to the master branch (see the sketch after the quoted passage below).
In fact, if you keep reading that document, it tells you:
The env repository can have several branches that each map to a specific environment (you only use production in this tutorial) and reference a specific container image, whereas the app repository does not.
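For instance, a deploy step in the env repository's cloudbuild.yaml could reuse the branch name as the target namespace, so each environment branch deploys into its own namespace (the zone and cluster name below are placeholders):

steps:
  - name: gcr.io/cloud-builders/kubectl
    args:
      - apply
      - -f
      - kubernetes.yaml
      - --namespace=$BRANCH_NAME               # production, staging, testing, ...
    env:
      - CLOUDSDK_COMPUTE_ZONE=us-central1-a    # placeholder zone
      - CLOUDSDK_CONTAINER_CLUSTER=my-cluster  # placeholder cluster name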
One more thing: if you have access to GitLab and Kubernetes, you can implement this without Google GKE and Cloud Build.

How do I version control a kubernetes application?

I've checked out helm.sh of course, but at first glance the entire setup seems a little complicated (helm-client & tiller-server). It seems to me like I can get away with just having a helm client in most cases.
This is what I currently do
Let's say I have a project composed of 3 services viz. postgres, express, nginx.
I create a directory called product-release that is as follows:
product-release/
    .git/
    k8s/
        postgres/
            Deployment.yaml
            Service.yaml
            Secret.mustache.yaml # Needs to be rendered by the dev before use
        express/
            Deployment.yaml
            Service.yaml
        nginx/
            Deployment.yaml
            Service.yaml
    updates/
        0.1__0.2/
            Job.yaml # postgres schema migration
            update.sh # k8s API server scripts to patch/replace existing k8s objects, and run the state change job
The usual git workflow applies now. Every time I make a change, I update the spec files, test them, write the update scripts that move from the last version to this current version, and then commit and tag it.
Questions:
This works for me so far, but is this "the right way"?
Why does Helm have the Tiller server? Isn't it simpler to do the templating on the client side? Of course, if you want to separate the activity of deployment from knowledge of the application (like secrets), the templating would have to happen on the server, but otherwise, why?
It seems that https://redspread.com/ (open source) addresses this particular issue, but it needs more development before it'll be production-ready - at least from my team's quick glance at it.
We'll stick with keeping YAML files in git together with the deployed application for now, I guess.
We are using kubernetes/helm (the latest/incubated version) and a central repository for Helm charts (which references the container images built for our component releases).
In other words, the Helm package definitions and its dependencies are separate from the source code and image definitions that make up the several components of our web applications.
Notice: Tiller has been removed in Helm v3. Check out this answer for details on why Helm v2 needed Tiller and why it was removed in Helm v3: https://v3.helm.sh/docs/faq/#removal-of-tiller
According to the idea of GitOps, what you did is a right way (performing releases from a git repo). However, if you want to push it further and make it more standard, you could plan a few more goals:
Choose a configuration management system beyond plain declarative Kubernetes definitions, e.g. Helm (as in the answer above, https://stackoverflow.com/a/42053983/914967) or Kustomize; both are purely client-side.
Avoid the custom release process by replacing update.sh with popular tools such as kubectl apply or helm install (see the sketch below).
Drive change delivery from git tags/branches with a CI/CD engine such as Argo CD, Travis CI or GitHub Actions.
Use a branching strategy so that you can try changes in a test/staging environment before delivering them to production.
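As a rough sketch of the second point above, with the directory layout from the question, the whole of update.sh could collapse into a couple of declarative commands (shown here as a GitLab-style CI job purely for illustration; any CI engine or a plain shell script works the same way):

deploy:
  script:
    # declaratively apply every service's manifests instead of scripting the API server
    - kubectl apply -f k8s/postgres/ -f k8s/express/ -f k8s/nginx/
    # run the one-off schema migration Job for this release
    - kubectl apply -f updates/0.1__0.2/Job.yaml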