Error while installing GitLab Runner into a GitLab project - Kubernetes

Operation failed. Check pod logs for install-runner for more details.
I am getting this error while trying to install GitLab Runner.
What I have done so far:
Successfully installed a Kubernetes cluster
Created a demo project in GitLab
Provided the Kubernetes cluster details to GitLab
Then, while trying to install the runner, it fails.
What am I missing here?

I was facing the same issue. In my case, it was because I had not set RBAC-enabled cluster to true. I deleted the integration, checked RBAC-enabled cluster when I re-integrated, and it worked.
Runner logs:
kubectl logs install-runner -n gitlab-managed-apps
Error: query: failed to query with labels: secrets is forbidden: User "system:serviceaccount:gitlab-managed-apps:default" cannot list resource "secrets" in API group "" in the namespace "gitlab-managed-apps"
Reference:
gitlab issue
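If deleting and re-creating the integration is not an option, a common workaround is to grant the service account named in the error the permissions it lacks. A minimal sketch (the binding name is arbitrary, and cluster-admin is broader than strictly necessary; a narrower Role scoped to secrets would also work):
kubectl create clusterrolebinding gitlab-managed-apps-default-admin \
  --clusterrole=cluster-admin \
  --serviceaccount=gitlab-managed-apps:default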

Warning, with GitLab 13.11 (April 2021):
One-click GitLab Managed Apps will be removed in GitLab 14.0
We are deprecating one-click install of GitLab Managed Apps.
Although they made it very easy to get started with deploying to Kubernetes from GitLab, the overarching community feedback was that they were not flexible or customizable enough for real-world Kubernetes applications.
Instead, our future direction will focus on installing apps on Kubernetes via GitLab CI/CD in order to provide a better balance between ease-of-use and expansive customization.
We plan to remove one-click Managed Apps completely in GitLab version 14.0.
This will not affect how existing managed applications run inside your cluster, however, you’ll no longer have the ability to modify those applications via the GitLab UI.
We recommend cluster administrators plan to migrate any existing managed applications by reinstalling them either manually or via CI/CD. Migration instructions will be available in our documentation later.
For users of alerts on managed Prometheus, in GitLab version 14.0, we will also remove the ability to setup/modify alerts from the GitLab UI. This change is necessary because the existing solution will no longer function once managed Prometheus is removed.
Deprecation date: May 22, 2021

Related

Use existing resources for a new Kustomize installation? (kubeflow)

I am trying to install kubeflow pipelines (KFP) for kubeflow on AWS, as shown here. I am using an overlay for some simple labeling and other cosmetic changes. Installing KFP in the way shown in the documentation will also deploy instances of argo and other necessary services. I already have an instance of argo running on the cluster, so how can I point KFP at that installation of argo instead of deploying a duplicate instance?
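For context, an overlay of the kind described might look like this sketch (paths and labels are purely illustrative, not specific to KFP):
mkdir -p overlays/aws
cat > overlays/aws/kustomization.yaml <<'EOF'
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base        # reuse the existing base instead of duplicating it
commonLabels:
  env: dev
EOF
kubectl apply -k overlays/aws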

What is the difference between a GitLab runner and a GitLab agent?

I'm new to GitLab and Kubernetes, and I'm wondering what the difference between a GitLab runner and a GitLab agent is.
On GitLab's site it says an agent is used to connect to the cluster, run pipelines, and deploy applications.
But with a regular runner you could just have a pipeline that invokes kubectl to interact with the cluster.
What is possible with an agent that isn't with a runner using kubectl?
The GitLab Agent (for Kubernetes) is the way GitLab interacts with the Kubernetes cluster (https://docs.gitlab.com/ee/user/clusters/agent/), and it can be used to allow GitLab to spawn GitLab runners, which are like Jenkins agents (https://docs.gitlab.com/runner/install/). Consider it like a broker or manager in this case. The agent would spawn the runners inside the cluster with the configuration you have set up.
For example, in my case, I have a node pool dedicated specifically to GitLab runners. These nodes are more expensive to run, since they're higher-spec than the standard nodes used by the rest of the cluster, so I want to make sure only the GitLab runners spawn there. I configure the runner with a node selector and toleration pointing at that specific node pool, so the cluster scales up that node pool to put the runner on it.
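A rough sketch of that kind of setup, assuming the official gitlab-runner Helm chart (the node pool label and taint are illustrative, and registration settings are omitted):
cat > runner-values.yaml <<'EOF'
runners:
  config: |
    [[runners]]
      [runners.kubernetes]
        # schedule CI jobs only onto the dedicated runner node pool
        [runners.kubernetes.node_selector]
          "cloud.google.com/gke-nodepool" = "ci-runners"
        [runners.kubernetes.node_tolerations]
          "dedicated=ci-runners" = "NoSchedule"
EOF
helm repo add gitlab https://charts.gitlab.io
helm upgrade --install gitlab-runner gitlab/gitlab-runner -f runner-values.yaml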
The agent itself provides way more functionality than just spawning runners, but your question only asks about the GitLab agent and Runner. You can review the pages I've linked if you would like to find out more.
From the docs:
GitLab Runner is an application that works with GitLab CI/CD to run jobs in a pipeline.
You should install GitLab Runner on a machine that's separate from the one that hosts the GitLab instance for security and performance reasons.
So GitLab Runner is designed to be installed on a different machine, to avoid security issues and performance impact on the machine hosting GitLab.
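For example, registering a runner on that separate machine might look like this (a sketch; the URL and registration token are placeholders):
gitlab-runner register \
  --non-interactive \
  --url https://gitlab.example.com/ \
  --registration-token <registration_token> \
  --executor docker \
  --docker-image alpine:latest \
  --description "separate-machine-runner"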
The GitLab Agent for Kubernetes (“Agent”, for short) is an active in-cluster component for connecting Kubernetes clusters to GitLab safely to support cloud-native deployment, management, and monitoring.
The Agent is installed into the cluster through code, providing you with a fast, safe, stable, and scalable solution.
"The GitLab Agent (for Kubernetes) is the way GitLab interacts with the Kubernetes cluster."
But that means it needs to be compatible with your GitLab instance.
Fortunately, with GitLab 14.9 (March 2022):
View GitLab agent for Kubernetes version in the UI
If you use the GitLab agent for Kubernetes, you must ensure that the agentk version installed in your cluster is compatible with the GitLab version.
While the compatibility between the GitLab installation and agentk versions is documented, until now it was not very intuitive to figure out compatibility issues.
To support you in your upgrades, GitLab now shows the agentk version installed on the agent listing page and highlights if an agentk upgrade is recommended.
See Documentation and Issue.
Plus, with GitLab 15.2 (July 2022)
API to retrieve agent server (KAS) metadata
After we released the agent for Kubernetes, one of the first requests we got was for an automated way to set it up.
In the past months, Timo implemented a REST API and extended the GitLab Terraform Provider to support managing agents in an automated way.
The current release further improves management of the agent specifically, and of GitLab in general, by introducing a /metadata REST endpoint that is the superset of the /version endpoint.
The /metadata endpoint contains information about the current GitLab version, whether the agent server (KAS) is enabled, and where it can be accessed. This change improves upon the previous functionality, where you had to put the KAS address in your automation scripts manually.
See Documentation and Issue.
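For illustration, querying that endpoint might look like this (the hostname and token are placeholders):
# /metadata is a superset of /version and reports whether KAS is enabled and where
curl --header "PRIVATE-TOKEN: <your_access_token>" \
  "https://gitlab.example.com/api/v4/metadata"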

How can I use Gitlab's Container Registry for Helm Charts with ArgoCDs CI/CD Mechanism?

My situation is as follows:
have a Kubernetes cluster with a couple of nodes
have ArgoCD installed on the cluster and working great
am using GitLab for my repo and build pipelines
have another repo for storing my Helm charts
have Docker images being built in GitLab and pushed to my GitLab registry
have ArgoCD able to point at my Helm chart repo and sync the Helm chart with my k8s cluster
have Helm chart archive files pushed to my GitLab repo
While this is a decent setup, it's not ideal.
The first problem I faced with using a Helm chart Git repo is that I can't figure out (or don't know) how to differentiate my staging environment from my production environment. Since I have a dev environment and a prod environment in my cluster, ArgoCD syncs both environments with the Helm chart repo. I could get around this with separate charts for each environment, but that isn't a valid solution.
The second problem I faced, while trying to get around the above problem, is that I can't get ArgoCD to pull Helm charts from a GitLab OCI registry. I made it so that my build pipeline pushes the Helm chart archive file to my GitLab container registry with the tag dev-latest or prod-latest, which is great, just what I want. The problem is that ArgoCD, as far as I can tell, can't pull from GitLab's container registry.
How do I go about getting my pipeline automated with GitLab as my repo and build pipeline, Helm for packaging my application, and ArgoCD for syncing my Helm application with my k8s cluster?
"I can't get argocd to pull helm charts from a gitlab oci registry."
You might be interested in GitLab 14.1 (July 2021):
Build, publish, and share Helm charts
Helm defines a chart as a Helm package that contains all of the resource definitions necessary to run an application, tool, or service inside of a Kubernetes cluster.
For organizations that create and manage their own Helm charts, it’s important to have a central repository to collect and share them.
GitLab already supports a variety of other package manager formats.
Why not also support Helm? That’s what community member and MVP from the 14.0 milestone Mathieu Parent asked several months ago before breaking ground on the new GitLab Helm chart registry. The collaboration between the community and GitLab is part of our dual flywheel strategy and one of the reasons I love working at GitLab. Chapeau Mathieu!
Now you can use your GitLab project to publish and share packaged Helm charts.
Simply add your project as a remote, authenticating with a personal access, deploy, or CI/CD job token.
Once that’s done you can use the Helm client or GitLab CI/CD to manage your Helm charts.
You can also download the charts using the API or the user interface.
What’s next? First, we’d like to present additional metadata for charts.
Then we’ll start dogfooding the feature by using it as a replacement for https://charts.gitlab.io/.
So, try out the feature and let us know how it goes by commenting in the epic GitLab-#6366.
See Documentation and issue.
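Concretely, using a project as a Helm chart repository looks roughly like this sketch (the hostname, project ID, channel, and chart name are placeholders; pushing relies on the chartmuseum cm-push plugin):
helm repo add --username <username> --password <access_token> \
  gitlab https://gitlab.example.com/api/v4/projects/<project_id>/packages/helm/stable
helm plugin install https://github.com/chartmuseum/helm-push
helm cm-push mychart-0.1.0.tgz gitlab
ArgoCD can then be pointed at the same URL as an ordinary Helm repository, which sidesteps the OCI registry limitation described in the question.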

How to enable GitLab CI/CD for a private GKE cluster?

I would like to set up the Auto DevOps functionality of GitLab CI/CD, and for that I am trying to set up my existing Kubernetes cluster as the environment.
However, GitLab requires the Kubernetes master API URL, which it uses to access the Kubernetes API. Kubernetes exposes several APIs; we want the "base" URL that is common to all of them,
e.g., https://kubernetes.example.com rather than https://kubernetes.example.com/api/v1.
We can get the API URL by running this command:
kubectl cluster-info | grep 'Kubernetes master' | awk '/http/ {print $NF}'
which returns an https:// URL.
In my case, I have a private IP: https://172.10.1.x
There isn't any documentation to help set up GitLab CI for a private GKE cluster.
How can I set up GitLab to access my Kubernetes master with the help of a running VM instance or a pod's service IP? If there are any solutions or workarounds to achieve this, please help.
Add Existing GKE cluster as Environment
There is now (Sept. 2020) an alternative, though it was initially not free (GitLab.com Premium/Ultimate only); it became free in part with 14.5+ (Nov. 2021), then fully with 15.3 (Aug. 2022).
See GitLab 13.4
Introducing the GitLab Kubernetes Agent
GitLab’s Kubernetes integration has long enabled deployment to Kubernetes clusters without manual setup. Many users have enjoyed the ease-of-use, while others have run into some challenges.
The current integration requires your cluster to be open to the Internet for GitLab to access it. For many organizations, this isn’t possible, because they must lock down their cluster access for security, compliance, or regulatory purposes. To work around these restrictions, users needed to create custom tooling on top of GitLab, or they couldn’t use the feature.
Today, we’re announcing the GitLab Kubernetes Agent: a new way to deploy to Kubernetes clusters. The Agent runs inside of your cluster, so you don’t need to open it to the internet. The Agent orchestrates deployments by pulling new changes from GitLab, rather than GitLab pushing updates to the cluster. No matter what method of GitOps you use, GitLab has you covered.
Note this is the first release of the Agent. Currently, the GitLab Kubernetes Agent has a configuration-driven setup, and enables deployment management by code. Some existing Kubernetes integration features, such as Deploy Boards and GitLab Managed Apps, are not yet supported. Our vision is to eventually implement these capabilities, and provide new security- and compliance-focused integrations with the Agent.
(Image: Introducing the GitLab Kubernetes Agent -- https://about.gitlab.com/images/13_4/gitops-header.png)
See Documentation and Issue.
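For reference, installing the agent with the official Helm chart looks roughly like this (the agent token and KAS address are placeholders, shown by GitLab when you register an agent):
helm repo add gitlab https://charts.gitlab.io
helm repo update
helm upgrade --install gitlab-agent gitlab/gitlab-agent \
  --namespace gitlab-agent --create-namespace \
  --set config.token=<agent_token> \
  --set config.kasAddress=wss://kas.gitlab.example.com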
See also GitLab 13.5 (October 2020)
Install the GitLab Kubernetes Agent with Omnibus GitLab
Last month we introduced the GitLab Kubernetes Agent for self-managed GitLab instances installed with Helm.
This release adds support for the official Linux package.
In this new Kubernetes integration, the Agent orchestrates deployments by pulling new changes from GitLab, rather than GitLab pushing updates to your cluster.
You can learn more about how the Kubernetes Agent works now and check out our vision to see what’s in store.
See Documentation and Issue.
This is confirmed with GitLab 13.11 (April 2021):
GitLab Kubernetes Agent available on GitLab.com
The GitLab Kubernetes Agent is finally available on GitLab.com. By using the Agent, you can benefit from fast, pull-based deployments to your cluster, while GitLab.com manages the necessary server-side components of the Agent.
The GitLab Kubernetes Agent is the core building block of GitLab’s Kubernetes integrations.
The Agent-based integration today supports pull-based deployments and Network Security policy integration and alerts, and will soon receive support for push-based deployments too.
Unlike the legacy, certificate-based Kubernetes integration, the GitLab Kubernetes Agent does not require opening up your cluster towards GitLab and allows fine-tuned RBAC controls around GitLab’s capabilities within your clusters.
See Documentation and issue.
See GitLab 14.5 (November 2021)
GitLab Kubernetes Agent available in GitLab Free
Connecting a Kubernetes cluster with the GitLab Kubernetes Agent simplifies the setup for cluster applications and enables secure GitOps deployments to the cluster.
Initially, the GitLab Kubernetes Agent was available only for Premium users.
In our commitment to the open source ethos, we moved the core features of the GitLab Kubernetes Agent and the CI/CD Tunnel to GitLab Free.
We expect that the open-sourced features are compelling to many users without dedicated infrastructure teams and strong requirements around cluster management.
Advanced features remain available as part of the GitLab Premium offering.
See Documentation and Epic.
See GitLab 14.8 (February 2022)
The agent server for Kubernetes is enabled by default
The first step for using the agent for Kubernetes in self-managed instances is to enable the agent server, a backend service for the agent for Kubernetes. In GitLab 14.7 and earlier, we required a GitLab administrator to enable the agent server manually. As the feature matured in the past months, we are making the agent server enabled by default to simplify setup for GitLab administrators. Besides being enabled by default, the agent server accepts various configuration options to customize it according to your needs.
See Documentation and Issue.
With GitLab 15.3 (August 2022):
GitOps features are now free
When you use GitOps to update a Kubernetes cluster, also called a pull-based deployment, you get an improved security model, better scalability and stability.
The GitLab agent for Kubernetes has supported GitOps workflows from its initial release, but until now, the functionality was available only if you had a GitLab Premium or Ultimate subscription. Now if you have a Free subscription, you also get pull-based deployment support. The features available in GitLab Free should serve small, high-trust teams or be suitable to test the agent before upgrading to a higher tier.
In the future, we plan to add built-in multi-tenant support for Premium subscriptions. This feature would be similar to the impersonation feature already available for the CI/CD workflow.
See Documentation and Issue.
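As a rough sketch of the agent's pull-based GitOps configuration (schema as documented around this era; the project path and glob are illustrative), in the agent's config file committed to the repository:
cat > .gitlab/agents/my-agent/config.yaml <<'EOF'
gitops:
  manifest_projects:
    - id: my-group/my-deployments   # project the agent pulls manifests from
      paths:
        - glob: '/manifests/**/*.yaml'
EOF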
See GitLab 15.4 (September 2022)
Deploy Helm charts with the agent for Kubernetes
You can now use the agent for Kubernetes to deploy Helm charts to your Kubernetes cluster.
Until now, the agent for Kubernetes only supported vanilla Kubernetes manifest files in its GitOps workflow.
To benefit from the GitOps workflow, Helm users had to use a CI/CD job to render and commit resources.
The current release ships with Alpha support for Helm.
Because Helm is a mature product, we consider the solution performant. However, known issues exist and the API might change without prior notice. We welcome your feedback in the related epic, where we discuss future improvements and next steps.
See Documentation and Issue.

Docker image deployment tool for Kubernetes

In my organization we use IBM UrbanCode to deploy Docker images to Kubernetes. Deploying with UrbanCode is not easy, and the process is not transparent; sometimes its output is confusing to release management. Are there any better tools used by the industry to deploy Docker applications on Kubernetes or the Docker EE platform?
I can share how we are doing it in our start-up.
We've built our own pipeline around Jenkins and Google Kubernetes Engine. There are not that many steps involved:
Create a tag of your built image(s): docker tag <source_image> <target_image>
Push image(s) to the Google Container Registry: gcloud docker -- push <target_image>
Change the YAML file definitions to select the new <target_image>
Update K8s configuration: kubectl apply -f <yaml_file>
Of course, in real life this is a little more complex and automatically updates tons of microservices, but you get the gist.
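Tied together, those four steps might look like this in a script (image names and the manifest path are illustrative):
# tag the built image for the registry
docker tag myapp:build-123 gcr.io/my-project/myapp:build-123
# push it to Google Container Registry
gcloud docker -- push gcr.io/my-project/myapp:build-123
# point the deployment manifest at the new tag
sed -i 's|gcr.io/my-project/myapp:.*|gcr.io/my-project/myapp:build-123|' deployment.yaml
# apply the updated configuration
kubectl apply -f deployment.yaml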
Because you asked for tools: there are lots of solutions out there to help you; please have a look at this list to get an overview. It all pretty much depends on what kind of environment you want to use them in. Some prominent examples are:
Wercker
Codefresh
Spinnaker
KubeCI
You can use the tools below for deploying Docker apps to Kubernetes:
Jenkins with the Kubernetes CD plugin:
https://github.com/jenkinsci/kubernetes-cd-plugin
Spinnaker