I am trying to understand the GitLab agent for Kubernetes, but it looks like it requires the developer to commit changes to a manifest file before it can deploy them to Kubernetes. This is a challenge when trying to do auto-deploys using GitLab pipelines, because those pipelines run after the commit. So how is the user supposed to create a new commit, in an automated way, that the GitLab agent can pick up?
I am wondering if anyone is using GitLab and its agent for auto-deploying to Kubernetes. I would really appreciate it if you could shed some light on this.
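For what it's worth, the common pattern is to have the pipeline itself make that commit: the build pipeline publishes a new image, then a job updates the manifest repository the agent watches and pushes the change with a token. A rough sketch, where the manifest repository URL, the token variable, and the manifest path are all assumptions about your setup:

```yaml
# .gitlab-ci.yml job in the application project (sketch): bump the image tag
# in the manifest repo the agent syncs from. Repo URL, token variable, and
# file path are placeholders.
update-manifests:
  stage: deploy
  image: alpine:3.19
  variables:
    NEW_IMAGE: "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"
  script:
    - apk add --no-cache git
    - git clone "https://oauth2:${MANIFEST_REPO_TOKEN}@gitlab.example.com/my-group/k8s-manifests.git"
    - cd k8s-manifests
    - 'sed -i "s|image:.*|image: ${NEW_IMAGE}|" deployment.yaml'
    - git config user.email "ci-bot@example.com"
    - git config user.name "CI Bot"
    - git commit -am "Deploy ${CI_COMMIT_SHORT_SHA}"
    - git push origin HEAD:main
  rules:
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
```

The agent then picks up the new commit on its next sync, so the "manual" commit step is fully automated.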
Related
I'm new to GitLab and Kubernetes, and I'm wondering what the difference between a GitLab Runner and a GitLab agent is.
GitLab's documentation says an agent is used to connect to the cluster, run pipelines, and deploy applications.
But with a regular runner you could just have a pipeline that invokes kubectl to interact with the cluster.
What is possible with an agent that isn't with a runner using kubectl?
The GitLab Agent for Kubernetes (https://docs.gitlab.com/ee/user/clusters/agent/) is the way GitLab interacts with the Kubernetes cluster, and it can be used to let GitLab spawn GitLab Runners, which are similar to Jenkins agents (https://docs.gitlab.com/runner/install/). Consider it a broker or manager in this case: the agent spawns the runners inside the cluster with the configuration you have set up.
For example, in my case, I have a node pool dedicated specifically to GitLab Runners. These nodes are more expensive to run, since they're higher-spec than the standard nodes used by the rest of the cluster, so I want to make sure only the GitLab Runners are scheduled there. I configure the Runner with a node selector and a toleration pointing at that specific node pool, so the cluster scales up that node pool to place the runner on it.
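As an illustration (not my exact setup), with the official gitlab-runner Helm chart that pinning can be expressed in the Kubernetes executor section of the runner config; the node-pool label and taint below are made up:

```yaml
# values.yaml fragment for the gitlab-runner Helm chart (sketch)
runners:
  config: |
    [[runners]]
      [runners.kubernetes]
        namespace = "gitlab-runners"
        # schedule job pods only onto the dedicated (tainted) node pool
        [runners.kubernetes.node_selector]
          "cloud.google.com/gke-nodepool" = "gitlab-runners"
        [runners.kubernetes.node_tolerations]
          "dedicated=gitlab-runners" = "NoSchedule"
```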
The agent itself provides way more functionality than just spawning runners, but your question only asks about the GitLab agent and Runner. You can review the pages I've linked if you would like to find out more.
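As one concrete example of that extra functionality: once an agent's configuration grants `ci_access` to a project, that project's CI jobs get a kubeconfig context through the agent and can run kubectl against the cluster without storing any cluster credentials in GitLab. A minimal sketch, with the project path and agent name made up:

```yaml
# .gitlab-ci.yml: deploy through the agent's connection (names are placeholders)
deploy:
  stage: deploy
  image:
    name: bitnami/kubectl:latest
    entrypoint: [""]
  script:
    # the context is named "<path of the agent's configuration project>:<agent name>"
    - kubectl config use-context my-group/cluster-management:my-agent
    - kubectl apply -f manifests/ -n my-namespace
```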
From the docs:
GitLab Runner is an application that works with GitLab CI/CD to run jobs in a pipeline.
you should install GitLab Runner on a machine that’s separate from the one that hosts the GitLab instance for security and performance reasons.
So GitLab Runner is designed to be installed on a separate machine, to avoid security issues and a performance impact on the machine hosting the GitLab instance.
The GitLab Agent for Kubernetes (“Agent”, for short) is an active in-cluster component for connecting Kubernetes clusters to GitLab safely to support cloud-native deployment, management, and monitoring.
The Agent is installed into the cluster through code, providing you with a fast, safe, stable, and scalable solution.
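In practice, "installed into the cluster through code" usually means installing the agentk Helm chart with the token and KAS address GitLab shows you when you register the agent. A minimal values sketch (both values below are placeholders):

```yaml
# values for the gitlab/gitlab-agent Helm chart (sketch)
config:
  token: "<agent token shown once at registration>"
  kasAddress: "wss://kas.gitlab.com"  # for self-managed: wss://<gitlab-host>/-/kubernetes-agent/
```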
The GitLab Agent (for Kubernetes) is the way GitLab interacts with the Kubernetes cluster
But that means it needs to be compatible with your GitLab instance.
Fortunately, with GitLab 14.9 (March 2022):
View GitLab agent for Kubernetes version in the UI
If you use the GitLab agent for Kubernetes, you must ensure that the agentk version installed in your cluster is compatible with the GitLab version.
While the compatibility between the GitLab installation and agentk versions is documented, until now it was not very intuitive to figure out compatibility issues.
To support you in your upgrades, GitLab now shows the agentk version installed on the agent listing page and highlights if an agentk upgrade is recommended.
See Documentation and Issue.
Plus, with GitLab 15.2 (July 2022):
API to retrieve agent server (KAS) metadata
After we released the agent for Kubernetes, one of the first requests we got was for an automated way to set it up.
In the past months, Timo implemented a REST API and extended the GitLab Terraform Provider to support managing agents in an automated way.
The current release further improves management of the agent specifically, and of GitLab in general, by introducing a /metadata REST endpoint that is a superset of the /version endpoint.
The /metadata endpoint contains information about the current GitLab version, whether the agent server (KAS) is enabled, and where it can be accessed. This change improves upon the previous functionality, where you had to put the KAS address in your automation scripts manually.
See Documentation and Issue.
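As a small illustration, the endpoint can be queried from a job (or any script) to find out whether KAS is enabled and where it is reachable; the token variable here is an assumption:

```yaml
# .gitlab-ci.yml: check KAS availability via the /metadata endpoint (sketch)
check-kas:
  stage: .pre
  image:
    name: curlimages/curl:latest
    entrypoint: [""]
  script:
    # the response includes the GitLab version and a "kas" object (enabled, externalUrl)
    - 'curl --fail --header "PRIVATE-TOKEN: ${API_READ_TOKEN}" "${CI_API_V4_URL}/metadata"'
```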
My situation is as follows:
have a kubernetes cluster with a couple of nodes
have argocd installed on the cluster and working great
using gitlab for my repo and build pipelines
have another repo for storing my helm charts
have docker images being built in gitlab and pushed to my gitlab registry
have argocd able to point to my helm chart repo and sync the helm chart with my k8s cluster
have helm chart archive files pushed to my gitlab repo
While this is a decent setup, it's not ideal.
The first problem I faced with using a Helm chart Git repo is that I can't (or don't know how to) differentiate my staging environment from my production environment. Since I have a dev environment and a prod environment in my cluster, Argo CD syncs both environments with the Helm chart repo. I could get around this with separate charts for each environment, but that isn't a valid solution.
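For reference, the usual way around this without duplicating the chart is one Argo CD Application per environment, each pointing at the same chart path but at a different values file. A sketch with made-up names and paths; a second Application (e.g. myapp-prod) would point at values-prod.yaml instead:

```yaml
# Argo CD Application for the dev environment (sketch)
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: myapp-dev
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://gitlab.example.com/my-group/helm-charts.git
    targetRevision: main
    path: charts/myapp
    helm:
      valueFiles:
        - values-dev.yaml
  destination:
    server: https://kubernetes.default.svc
    namespace: myapp-dev
  syncPolicy:
    automated: {}
```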
The second problem I faced, while trying to get around the above problem, is that I can't get Argo CD to pull Helm charts from a GitLab OCI registry. I made it so that my build pipeline pushed the Helm chart archive file to my GitLab container registry with the tag dev-latest or prod-latest, which is great, just what I want. The problem is that Argo CD, as far as I can tell, can't pull from GitLab's container registry.
How do I go about getting my pipeline automated with gitlab as my repo and build pipeline, helm for packaging my application, and argocd for syncing my helm application with my k8s cluster?
is that I can't get argocd to pull helm charts from a gitlab oci registry.
You might be interested in GitLab 14.1 (July 2021):
Build, publish, and share Helm charts
Helm defines a chart as a Helm package that contains all of the resource definitions necessary to run an application, tool, or service inside of a Kubernetes cluster.
For organizations that create and manage their own Helm charts, it’s important to have a central repository to collect and share them.
GitLab already supports a variety of other package manager formats.
Why not also support Helm? That’s what community member and MVP from the 14.0 milestone Mathieu Parent asked several months ago before breaking ground on the new GitLab Helm chart registry. The collaboration between the community and GitLab is part of our dual flywheel strategy and one of the reasons I love working at GitLab. Chapeau Mathieu!
Now you can use your GitLab project to publish and share packaged Helm charts.
Simply add your project as a remote, authenticating with a personal access token, deploy token, or CI/CD job token.
Once that’s done you can use the Helm client or GitLab CI/CD to manage your Helm charts.
You can also download the charts using the API or the user interface.
What’s next? First, we’d like to present additional metadata for charts.
Then we’ll start dogfooding the feature by using it as a replacement for https://charts.gitlab.io/.
So, try out the feature and let us know how it goes by commenting in the epic GitLab-#6366.
See Documentation and Issue.
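To make the "add your project as a remote" step concrete, a publishing job and the matching helm repo add might look roughly like this; the channel name and chart filename are placeholders, and whether the CI job token is sufficient depends on your project's settings:

```yaml
# .gitlab-ci.yml: push a packaged chart to the project's Helm registry (sketch)
publish-chart:
  stage: deploy
  image: alpine:3.19
  script:
    - apk add --no-cache curl
    - >
      curl --fail --request POST
      --user "gitlab-ci-token:${CI_JOB_TOKEN}"
      --form "chart=@myapp-0.1.0.tgz"
      "${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/packages/helm/api/stable/charts"
    # consumers can then add the repo with, e.g.:
    #   helm repo add myproject "${CI_API_V4_URL}/projects/<project-id>/packages/helm/stable"
```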
I want to create a CI/CD pipeline from GitHub to AWS EKS.
Is it possible to create a pipeline from GitHub to AWS EKS deployments with GitHub Actions?
Yes, it's possible. You need to use some kind of CI/CD tool (Jenkins, GitLab, or AWS native services) in between to automate the whole process.
The flow would be something like:
Developer commits changes --> CI/CD pipeline is triggered --> Docker image is built --> Image is pushed to ECR --> Latest image is deployed to EKS (using kubectl or Helm charts)
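In GitHub Actions terms, that flow maps roughly to a single workflow; the account ID, region, cluster name, and secret names below are placeholders:

```yaml
# .github/workflows/deploy.yml (sketch)
name: build-and-deploy
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-1
      - name: Build and push image to ECR
        run: |
          aws ecr get-login-password | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
          docker build -t 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:$GITHUB_SHA .
          docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:$GITHUB_SHA
      - name: Deploy latest image to EKS
        run: |
          aws eks update-kubeconfig --name my-cluster --region us-east-1
          kubectl set image deployment/myapp myapp=123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:$GITHUB_SHA
```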
Please refer to:
https://www.eksworkshop.com/intermediate/260_weave_flux/ (this has an example of an end-to-end implementation)
https://www.weave.works/blog/gitops-with-github-actions-eks
https://aws.amazon.com/blogs/opensource/git-push-deploy-app-eks-gitkube/
I am using GCP and Kubernetes.
I have a GCP source repository and a container registry.
I have a trigger that builds the container after a push to the master branch.
I don't know how to set up a trigger to automatically deploy the new version of the container (Dockerfile).
How can I automate the build process?
You need some extra pieces to do it. For example, if you use Helm to package your deployment, you can use Flux to trigger the automated deployment.
https://helm.sh/
https://fluxcd.github.io/flux/
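As a rough illustration of how those two pieces fit together (using the current Flux v2 resource names, which differ from the older docs linked above; all names are placeholders):

```yaml
# Flux v2 HelmRelease tracking a chart from a HelmRepository source (sketch)
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: myapp
  namespace: default
spec:
  interval: 5m
  chart:
    spec:
      chart: myapp
      version: ">=0.1.0"   # new chart versions are picked up automatically
      sourceRef:
        kind: HelmRepository
        name: my-chart-repo
        namespace: flux-system
```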
There are two solutions here.
You can expand the build step: Cloud Build can also push changes to your GKE cluster (a sketch of this follows below). You can read more about this here.
What you currently have is a solid CI pipeline. For the CD part, you can use Spinnaker for GCP, which was released recently. It integrates well with GCE, GKE, and GAE and lets you automate the CD portion.
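Regarding the first option, a Cloud Build config can end with a deploy step that uses the kubectl builder; the cluster name, zone, and deployment name below are placeholders:

```yaml
# cloudbuild.yaml (sketch): build, push, then roll the new image out to GKE
steps:
  - name: gcr.io/cloud-builders/docker
    args: ["build", "-t", "gcr.io/$PROJECT_ID/myapp:$COMMIT_SHA", "."]
  - name: gcr.io/cloud-builders/docker
    args: ["push", "gcr.io/$PROJECT_ID/myapp:$COMMIT_SHA"]
  - name: gcr.io/cloud-builders/kubectl
    args: ["set", "image", "deployment/myapp", "myapp=gcr.io/$PROJECT_ID/myapp:$COMMIT_SHA"]
    env:
      - CLOUDSDK_COMPUTE_ZONE=us-central1-a
      - CLOUDSDK_CONTAINER_CLUSTER=my-cluster
```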
I have a private GitLab instance with multiple projects and GitLab CI enabled. The infrastructure is provided by Google Cloud Platform, and the GitLab pipeline runner is configured in a Kubernetes cluster.
This setup works very well for basic pipelines running tests, etc. Now I'd like to start with CD, and to do that I need a manual acceptance step in the pipeline, which means the person reviewing it needs access to the current state of the app.
What I'm thinking of is a Kubernetes deployment for the pipeline that would be created once you try to access it (so we don't waste cluster resources) and destroyed once the reviewer accepts the pipeline, or after some threshold.
So the deployment would run in the same cluster as the GitLab Runner (or a different one?) and would be accessible at a unique URI (we're mostly talking about web-server apps), e.g. https://pipeline-58949526.git.mydomain.com.
While in theory, it all makes sense to me, I don't really know how to set this up properly.
Does anyone have a similar setup? Is my view on this topic too simple? Let me know!
Thanks
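For what it's worth, the setup described above maps closely onto GitLab's dynamic environments (Review Apps): each branch gets its own environment with a URL, the environment can be stopped manually or after a timeout, and a manual job gives you the acceptance gate. A minimal sketch, assuming a wildcard DNS entry and deploy/teardown scripts that already exist in the repo:

```yaml
# .gitlab-ci.yml: per-branch review environment (domain and scripts are placeholders)
deploy-review:
  stage: deploy
  script:
    - ./deploy.sh "review-${CI_COMMIT_REF_SLUG}"
  environment:
    name: review/$CI_COMMIT_REF_SLUG
    url: https://$CI_COMMIT_REF_SLUG.git.mydomain.com
    on_stop: stop-review
    auto_stop_in: 1 week          # destroy after a threshold if nobody stops it
  rules:
    - if: $CI_COMMIT_BRANCH != $CI_DEFAULT_BRANCH

stop-review:
  stage: deploy
  script:
    - ./teardown.sh "review-${CI_COMMIT_REF_SLUG}"
  environment:
    name: review/$CI_COMMIT_REF_SLUG
    action: stop
  rules:
    - if: $CI_COMMIT_BRANCH != $CI_DEFAULT_BRANCH
      when: manual
```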
If you want to see how to automate CI/CD with multiple environments on GKE, using GitOps for promotion between environments and Preview Environments on pull requests, you might want to check out my recent talk on Jenkins X at Devoxx UK, where I do a live demo of this on GKE.