GitLab Kubernetes Agent: how to restrict access by namespace or environment

I'm trying to move from the certificate-based GitLab Kubernetes integration, which was deprecated, to the new agent-based Kubernetes integration.
I use the CI/CD workflow, and I created a separate project for the GitLab Kubernetes agents and registered them there.
The question is how to restrict the usage of the registered agents in other projects?
Previously, with the certificate-based approach, you could set a target namespace in the project settings, and you could also set an environment for the integrated cluster to use it with protected environments.
Now the Kubernetes context is simply available in other projects under the same group, and once you have access to the CI/CD files you can do whatever you want and deploy anywhere.
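For the CI/CD workflow, the scoping lives in the agent's configuration file in the agents project. A minimal sketch (project path and namespace are placeholders) of .gitlab/agents/<agent-name>/config.yaml:

    ci_access:
      projects:
        - id: my-group/my-app               # only projects listed here get the context
          default_namespace: my-app-staging  # default namespace for kubectl in CI jobs
          access_as:
            agent: {}                        # run with the agent's own service account

Note that default_namespace is only a default, not an enforcement. To actually restrict what CI jobs can do, the (Premium) impersonation feature can be used instead, e.g. access_as: { ci_job: {} }, combined with Kubernetes RBAC bound to the impersonated identity, so each project or environment only has rights in its own namespace.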

Related

Can you have multiple Approvals in Azure DevOps Environments

Context
I'm deploying multiple apps with Azure Pipelines, each to an app-dedicated namespace in a single AKS cluster.
Problem
My ADO Environment is assigned to a single AKS cluster, so when I add Approvals and checks for that environment, every deployment that points to that environment will need approval. This is problematic when you deploy multiple apps per environment, because all deployments will be guarded by this policy.
Question
Besides creating an environment with its own approvals per app, is there a way of setting up granular approvals within ADO Environments?
Short Answer
"No". At the moment of writing this answer, an environment in DevOps supports only one approval -maybe it will change in the future.
Solution
There is a workaround for this problem: you can set approvals on an individual service connection instead.
When you deploy an application to an individual namespace, a service connection is automatically created for that namespace. It might look like <aks-cluster-name>-<k8s-namespace>-<long-integer-id>, e.g. my-aks-dev-we-sandbox-1654784698962. You just need to find this service connection in ADO's Project settings, click it, then click the three dots in the upper-right corner and choose Approvals and checks. This way, you will be able to control who should do the approvals.
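To make a deployment actually pass through that check in YAML pipelines, point the deployment job at the environment's Kubernetes resource, so the job uses the namespace-scoped service connection; a sketch with hypothetical names:

    jobs:
      - deployment: deploy_sandbox
        environment: dev.sandbox          # <environment>.<Kubernetes resource>
        strategy:
          runOnce:
            deploy:
              steps:
                - task: KubernetesManifest@0
                  inputs:
                    action: deploy        # uses the resource's service connection,
                    manifests: 'manifests/*.yaml'  # so its checks gate this job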

What is the difference between a GitLab Runner and a GitLab Agent?

I'm new to GitLab and Kubernetes and I'm wondering what the difference between a GitLab Runner and a GitLab Agent is.
On GitLab it says an agent is used to connect to the cluster, run pipelines, and deploy applications.
But with a regular runner you could just have a pipeline that invokes kubectl to interact with the cluster.
What is possible with an agent that isn't with a runner using kubectl?
The GitLab Agent (for Kubernetes) is the way GitLab interacts with the Kubernetes cluster (https://docs.gitlab.com/ee/user/clusters/agent/) and is used to allow GitLab to generate GitLab Runners, which are like Jenkins agents (https://docs.gitlab.com/runner/install/). Consider it like a broker or manager in this case. The agent would spawn the runners inside the cluster with the configuration you have set up.
For example, in my case, I have a node-pool dedicated specifically for gitlab runners. These nodes are more expensive to run, since they're higher-spec than the standard nodes used by the rest of the cluster, so I want to make sure only the GitLab runners spawn on there. I configure the Runner to have a node selector and toleration pointing at that specific node pool so the cluster scales up that node pool to put the runner on it.
The agent itself provides way more functionality than just spawning runners, but your question only asks about the GitLab agent and Runner. You can review the pages I've linked if you would like to find out more.
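For illustration, the node-pool pinning described above maps to the Kubernetes executor settings in the runner's config.toml. A sketch via the gitlab/gitlab-runner Helm chart values (pool name and taint are hypothetical):

    gitlabUrl: https://gitlab.example.com/
    runnerRegistrationToken: "<registration token>"
    runners:
      config: |
        [[runners]]
          [runners.kubernetes]
            # schedule CI job pods only on the dedicated runner node pool
            [runners.kubernetes.node_selector]
              "cloud.google.com/gke-nodepool" = "ci-runners"
            # tolerate the taint that keeps other workloads off that pool
            [runners.kubernetes.node_tolerations]
              "dedicated=ci-runners" = "NoSchedule"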
From the docs:
GitLab Runner is an application that works with GitLab CI/CD to run jobs in a pipeline.
You should install GitLab Runner on a machine that's separate from the one that hosts the GitLab instance, for security and performance reasons.
So GitLab Runner is designed to be installed on a different machine, to avoid security issues and performance impact on the host machine.
The GitLab Agent for Kubernetes (“Agent”, for short) is an active in-cluster component for connecting Kubernetes clusters to GitLab safely to support cloud-native deployment, management, and monitoring.
The Agent is installed into the cluster through code, providing you with a fast, safe, stable, and scalable solution.
The GitLab Agent (for Kubernetes) is the way GitLab interacts with the Kubernetes cluster
But that means it needs to be compatible with your GitLab instance.
Fortunately, with GitLab 14.9 (March 2022):
View GitLab agent for Kubernetes version in the UI
If you use the GitLab agent for Kubernetes, you must ensure that the agentk version installed in your cluster is compatible with the GitLab version.
While the compatibility between the GitLab installation and agentk versions is documented, until now it was not very intuitive to figure out compatibility issues.
To support you in your upgrades, GitLab now shows the agentk version installed on the agent listing page and highlights if an agentk upgrade is recommended.
See Documentation and Issue.
Plus, with GitLab 15.2 (July 2022):
API to retrieve agent server (KAS) metadata
After we released the agent for Kubernetes, one of the first requests we got was for an automated way to set it up.
In the past months, Timo implemented a REST API and extended the GitLab Terraform Provider to support managing agents in an automated way.
The current release further improves management of the agent specifically, and of GitLab in general, by introducing a /metadata REST endpoint that is the superset of the /version endpoint.
The /metadata endpoint contains information about the current GitLab version, whether the agent server (KAS) is enabled, and where it can be accessed. This change improves upon the previous functionality, where you had to put the KAS address in your automation scripts manually.
See Documentation and Issue.
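For a quick compatibility check from a pipeline, a job along these lines could query the endpoint (GITLAB_API_TOKEN is a hypothetical CI/CD variable holding a personal access token):

    check-kas:
      image:
        name: curlimages/curl:latest
        entrypoint: [""]
      script:
        # /metadata reports the GitLab version plus whether KAS is enabled and its URL
        - 'curl --fail --header "PRIVATE-TOKEN: ${GITLAB_API_TOKEN}" "${CI_API_V4_URL}/metadata"'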

How to enable GitLab CI/CD for a private GKE cluster?

I would like to set up the Auto DevOps functionality of GitLab CI/CD, and for that I am trying to set up the existing Kubernetes cluster as my environment.
However, GitLab requires the Kubernetes master API URL, which it uses to access the Kubernetes API. Kubernetes exposes several APIs; we want the "base" URL that is common to all of them,
e.g., https://kubernetes.example.com rather than https://kubernetes.example.com/api/v1.
We can get the API URL by running this command:
kubectl cluster-info | grep 'Kubernetes master' | awk '/http/ {print $NF}'
which returns an https:// URL.
In my case, I have a private IP, https://172.10.1.x.
There isn't any documentation to help set up GitLab CI for a private GKE cluster.
How can I set up GitLab to access my Kubernetes master with the help of a running VM instance or a pod's service IP? If there are any solutions or workaround suggestions to achieve this, please help.
Add Existing GKE cluster as Environment
There is now (Sept. 2020) an alternative, but it was not free at first (GitLab.com Premium/Ultimate only); it became free in part with 14.5 (Nov. 2021), then fully with 15.3 (Aug. 2022).
See GitLab 13.4
Introducing the GitLab Kubernetes Agent
GitLab’s Kubernetes integration has long enabled deployment to Kubernetes clusters without manual setup. Many users have enjoyed the ease-of-use, while others have run into some challenges.
The current integration requires your cluster to be open to the Internet for GitLab to access it. For many organizations, this isn’t possible, because they must lock down their cluster access for security, compliance, or regulatory purposes. To work around these restrictions, users needed to create custom tooling on top of GitLab, or they couldn’t use the feature.
Today, we’re announcing the GitLab Kubernetes Agent: a new way to deploy to Kubernetes clusters. The Agent runs inside of your cluster, so you don’t need to open it to the internet. The Agent orchestrates deployments by pulling new changes from GitLab, rather than GitLab pushing updates to the cluster. No matter what method of GitOps you use, GitLab has you covered.
Note this is the first release of the Agent. Currently, the GitLab Kubernetes Agent has a configuration-driven setup, and enables deployment management by code. Some existing Kubernetes integration features, such as Deploy Boards and GitLab Managed Apps, are not yet supported. Our vision is to eventually implement these capabilities, and provide new security- and compliance-focused integrations with the Agent.
(Image: "Introducing the GitLab Kubernetes Agent", https://about.gitlab.com/images/13_4/gitops-header.png)
See Documentation and Issue.
See also GitLab 13.5 (October 2020)
Install the GitLab Kubernetes Agent with Omnibus GitLab
Last month we introduced the GitLab Kubernetes Agent for self-managed GitLab instances installed with Helm.
This release adds support for the official Linux package.
In this new Kubernetes integration, the Agent orchestrates deployments by pulling new changes from GitLab, rather than GitLab pushing updates to your cluster.
You can learn more about how the Kubernetes Agent works now and check out our vision to see what’s in store.
See Documentation and Issue.
This is confirmed with GitLab 13.11 (April 2021):
GitLab Kubernetes Agent available on GitLab.com
The GitLab Kubernetes Agent is finally available on GitLab.com. By using the Agent, you can benefit from fast, pull-based deployments to your cluster, while GitLab.com manages the necessary server-side components of the Agent.
The GitLab Kubernetes Agent is the core building block of GitLab’s Kubernetes integrations.
The Agent-based integration today supports pull-based deployments and Network Security policy integration and alerts, and will soon receive support for push-based deployments too.
Unlike the legacy, certificate-based Kubernetes integration, the GitLab Kubernetes Agent does not require opening up your cluster towards GitLab and allows fine-tuned RBAC controls around GitLab’s capabilities within your clusters.
See Documentation and issue.
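As an illustration of those RBAC controls, the agent's service account can be limited to a single namespace with an ordinary RoleBinding; a sketch assuming the agent was installed into the gitlab-agent namespace with a service account named gitlab-agent (for this to be restrictive, any broader binding created by the default install would need to be removed):

    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: gitlab-agent-deployer
      namespace: my-app-staging        # the only namespace the agent may write to
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: edit                       # built-in ClusterRole with write access
    subjects:
      - kind: ServiceAccount
        name: gitlab-agent             # service account used by the agent (assumed name)
        namespace: gitlab-agent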
See GitLab 14.5 (November 2021)
GitLab Kubernetes Agent available in GitLab Free
Connecting a Kubernetes cluster with the GitLab Kubernetes Agent simplifies the setup for cluster applications and enables secure GitOps deployments to the cluster.
Initially, the GitLab Kubernetes Agent was available only for Premium users.
In our commitment to the open source ethos, we moved the core features of the GitLab Kubernetes Agent and the CI/CD Tunnel to GitLab Free.
We expect that the open-sourced features are compelling to many users without dedicated infrastructure teams and strong requirements around cluster management.
Advanced features remain available as part of the GitLab Premium offering.
See Documentation and Epic.
See GitLab 14.8 (February 2022)
The agent server for Kubernetes is enabled by default
The first step for using the agent for Kubernetes in self-managed instances is to enable the agent server, a backend service for the agent for Kubernetes. In GitLab 14.7 and earlier, we required a GitLab administrator to enable the agent server manually. As the feature matured in the past months, we are making the agent server enabled by default to simplify setup for GitLab administrators. Besides being enabled by default, the agent server accepts various configuration options to customize it according to your needs.
See Documentation and Issue.
With GitLab 15.3 (August 2022):
GitOps features are now free
When you use GitOps to update a Kubernetes cluster, also called a pull-based deployment, you get an improved security model, better scalability and stability.
The GitLab agent for Kubernetes has supported GitOps workflows from its initial release, but until now, the functionality was available only if you had a GitLab Premium or Ultimate subscription. Now if you have a Free subscription, you also get pull-based deployment support. The features available in GitLab Free should serve small, high-trust teams or be suitable to test the agent before upgrading to a higher tier.
In the future, we plan to add built-in multi-tenant support for Premium subscriptions. This feature would be similar to the impersonation feature already available for the CI/CD workflow.
See Documentation and Issue.
See GitLab 15.4 (September 2022)
Deploy Helm charts with the agent for Kubernetes
You can now use the agent for Kubernetes to deploy Helm charts to your Kubernetes cluster.
Until now, the agent for Kubernetes only supported vanilla Kubernetes manifest files in its GitOps workflow.
To benefit from the GitOps workflow, Helm users had to use a CI/CD job to render and commit resources.
The current release ships with Alpha support for Helm.
Because Helm is a mature product, we consider the solution performant. However, known issues exist and the API might change without prior notice. We welcome your feedback in the
related epic, where we discuss future improvements and next steps.
See Documentation and Issue.
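For reference, the manifest-based GitOps flow mentioned above is driven by the same agent configuration file; a minimal sketch (project path and glob are placeholders):

    gitops:
      manifest_projects:
        - id: my-group/manifests           # repository holding rendered manifests
          default_namespace: my-app
          paths:
            - glob: 'manifests/**/*.yaml'  # the agent watches and applies these files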

Creating kubernetes deployment in gitlab pipeline

I have a private GitLab instance with multiple projects and GitLab CI enabled. The infrastructure is provided by Google Cloud Platform, and the GitLab pipeline runner is configured in a Kubernetes cluster.
This setup works very well for basic pipelines running tests etc. Now I'd like to start with CD, and to do that I need a manual acceptance step on the pipeline, which means the person reviewing it needs access to the current state of the app.
What I'm thinking of is a Kubernetes deployment for the pipeline that would be started once you try to access it (so we don't waste cluster resources) and destroyed once the reviewer accepts the pipeline, or after some threshold.
So the deployment would be executed in the same cluster as the GitLab Runner (or a different one?) and would be accessible at a unique URI (we're mostly talking about web-server apps), e.g. https://pipeline-58949526.git.mydomain.com
While in theory, it all makes sense to me, I don't really know how to set this up properly.
Does anyone have a similar setup? Is my view on this topic too simple? Let me know!
Thanks
If you want to see how to automate CI/CD with multiple environments on GKE, using GitOps for promotion between environments and preview environments on pull requests, you might want to check out my recent talk on Jenkins X at Devoxx UK, where I do a live demo of this on GKE.
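If you stay within GitLab, dynamic review environments come close to what the question describes (everything except starting on first access); a .gitlab-ci.yml sketch, with the domain and manifest paths as assumptions:

    deploy_review:
      stage: deploy
      script:
        - kubectl apply -f k8s/review/     # create the per-branch deployment
      environment:
        name: review/$CI_COMMIT_REF_SLUG
        url: https://pipeline-$CI_PIPELINE_ID.git.mydomain.com
        on_stop: stop_review
        auto_stop_in: 1 week               # destroy after a threshold
      rules:
        - if: $CI_COMMIT_BRANCH && $CI_COMMIT_BRANCH != $CI_DEFAULT_BRANCH

    stop_review:
      stage: deploy
      script:
        - kubectl delete -f k8s/review/    # tear the review app down
      environment:
        name: review/$CI_COMMIT_REF_SLUG
        action: stop
      rules:
        - if: $CI_COMMIT_BRANCH && $CI_COMMIT_BRANCH != $CI_DEFAULT_BRANCH
          when: manual
      allow_failure: true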

Service Fabric: Change settings during continuous deployment

I have an SFC that gets deployed to different staging environments. The services have some settings parameters in their settings files. The values of these settings change depending on the staging environment.
I've read the article Manage application parameters for multiple environments, but it is not clear what is meant by "environment" there. Is it the number and type of nodes, or the staging environment?
How can I change those values from a Release/Build definition? Is there an ApplicationParameters transformation, just like with Web.config?
Thanks
In Service Fabric, your application will have one ApplicationParameters file per environment, and also one PublishProfile.
Your publish profile will define some deployment configurations; one of these configurations is the ApplicationParameters file.
I'll assume you are using VSTS to deploy your cluster.
You will add a Service Fabric deployment step; it will require a few settings, one of which is the publish profile path.
To make it dynamic, I'd recommend naming your PublishProfile the same way you name your environments, and using the environment name to get the publish profile.
Summary:
VSTS Release will run the Service Fabric Deployment Step.
The SF deployment step will use the environment name to find the publish profile (example: Environment=Prod -> PublishProfile=Prod.xml)
The PublishProfile will point to an application parameters file
The application parameters file will have the settings applicable to that environment (I recommend using the same naming pattern here, Prod.xml, to ease maintenance)
With this configuration, you can use the same release definition to deploy the application into multiple environments; if a new environment is created, the only things you have to define are the PublishProfile and ApplicationParameters files.
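In YAML pipelines, that convention can look like the following sketch (connection and path names are assumptions; classic releases would use Release.EnvironmentName instead of Environment.Name):

    steps:
      - task: ServiceFabricDeploy@1
        inputs:
          applicationPackagePath: '$(Pipeline.Workspace)/drop/ApplicationPackage'
          serviceConnectionName: 'sf-$(Environment.Name)'   # hypothetical naming scheme
          # the environment name doubles as the publish profile name, e.g. Prod -> Prod.xml
          publishProfilePath: '$(Pipeline.Workspace)/drop/PublishProfiles/$(Environment.Name).xml'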