GitLab autoscaling in Kubernetes

I am using GitLab Runner in Kubernetes for building our application. Since ours is a Docker-in-Docker use case, we are using Kaniko to build images from a Dockerfile.
I am having a hard time figuring out how to implement horizontal/vertical scaling for pods and instances.
Looking at the advanced configuration docs (https://docs.gitlab.com/runner/configuration/advanced-configuration.html#the-runnersmachine-section), autoscaling is described in terms of the docker+machine image, but
I don't want any Docker dependency in our build process, especially now that Kubernetes has deprecated Docker support (the dockershim) in newer versions.
Any advice?
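For context, the Kaniko part of our setup follows the usual pattern of running the executor image as a CI job. A rough sketch of such a job (registry auth setup omitted; the CI_REGISTRY* variables are the ones GitLab injects automatically):

```yaml
# .gitlab-ci.yml -- sketch of a Kaniko build job on the Kubernetes executor
build-image:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  script:
    - /kaniko/executor
      --context "${CI_PROJECT_DIR}"
      --dockerfile "${CI_PROJECT_DIR}/Dockerfile"
      --destination "${CI_REGISTRY_IMAGE}:${CI_COMMIT_SHORT_SHA}"
```

No Docker daemon is involved here; Kaniko builds and pushes the image entirely in user space inside the pod.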

Related

How to run docker image in Kubernetes (v1.21) pod for CI/CD scenario?

We're investigating using an AKS cluster to run our Azure Pipeline Agents.
I've looked into tools like Kaniko for building Docker images, which should work fine. However, some of our pipelines run docker commands directly (for example, we run checkov via its Docker image), and I've struggled to find any solution that works, given the deprecation of the dockershim in Kubernetes.
The obvious solution would be to add the tools we currently run via docker into our agent image, but that isn't great: every time a developer wants to run such a tool, we would have to rebuild the agent image, which is less than ideal.
Any suggestions?
Thanks
You could just use nerdctl or crictl for your commands, and even create an alias (especially with nerdctl): alias docker="nerdctl".
Since Docker images, or more precisely container images, use the OCI image format, you will have no issues running them with containerd or CRI-O.
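A minimal sketch of that setup, e.g. in ~/.bashrc (assuming nerdctl is on your PATH; the socket path is the containerd default and may differ on your nodes):

```shell
# ~/.bashrc -- let existing scripts keep calling "docker"
alias docker="nerdctl"

# crictl talks to the CRI socket directly; point it at containerd
export CONTAINER_RUNTIME_ENDPOINT="unix:///run/containerd/containerd.sock"
```

With the alias in place, commands like `docker run` and `docker build` are transparently handled by nerdctl against containerd.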

What is the purpose of Docker-compose and Portainer?

I am an engineering newbie who is trying to learn something while experimenting with a Raspberry Pi. I wanted to try out Docker, so I followed a tutorial which installed Docker, docker-compose, and, through it, Portainer.
Now I am a bit lost.
What is the purpose of docker-compose?
What is the purpose of Portainer?
And if I want to add some container like NextCloud/openVPN, how do I do that? Through Portainer? Or docker-compose?
Thanks!
Portainer is just a visualization tool for the Docker suite. You can run Docker commands anywhere a portainer-agent runs, as long as you have access to it; note that it is not an official Docker product.
Docker-compose, on the other hand, is a set of docker commands, part of Docker Engine (Community and Enterprise), that helps you orchestrate containers on a single node (PC or VM). If you want to orchestrate more than a single node, you should read about Docker Swarm or Kubernetes.
A very nice article to understand the differences between Swarm and Compose is here.
Portainer is just a tool on top of Docker that gives you a UI for free, because the native Docker Universal Control Plane is available only in the Enterprise edition.
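To make the docker-compose part concrete, here is a sketch of a single-node stack description (image, port, and volume names are only examples, picked because you mentioned NextCloud):

```yaml
# docker-compose.yml -- declarative description of a small stack,
# started with "docker-compose up -d" and stopped with "docker-compose down"
version: "3"
services:
  nextcloud:
    image: nextcloud:latest
    ports:
      - "8080:80"          # host port 8080 -> container port 80
    volumes:
      - nextcloud-data:/var/www/html   # named volume so data survives restarts
    restart: unless-stopped
volumes:
  nextcloud-data:
```

The point of the file is that the whole stack (images, ports, volumes, restart policy) is versionable text, instead of a series of manual docker run commands.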

k8s dashboard (v2.0.0-betaX) on ARM device

I have a two-node Raspberry Pi 4 Kubernetes cluster. It uses k3s (https://github.com/rancher/k3s), which is built on k8s 1.16.
I want to install k8s dashboard (https://github.com/kubernetes/dashboard)
However, the last ARM-compiled image is v1.10.1, which is not compatible with k8s 1.16.
Is there an (un)official image of the k8s dashboard v2.0.0-betaX compiled for ARM?
Or does anyone have tips on how to compile such an ARM image?
Thanks in advance.
According to this article on the Docker blog, Docker has supported multi-platform images since September 2017.
Kubernetes dashboard images already support multiple architectures, so you don't need to build one yourself.
Take a look at the image below.
You can use the kubernetesui/dashboard tag, and your Docker daemon should pull the appropriate image for your architecture.
Let me know if it helped.
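In practice this means the pod spec can reference the tag directly, with no architecture suffix; a sketch of the relevant container entry (the version tag here is illustrative):

```yaml
# Excerpt of the dashboard Deployment's pod spec; the same tag works on
# amd64 and arm because kubernetesui/dashboard is a multi-arch manifest list.
containers:
  - name: kubernetes-dashboard
    image: kubernetesui/dashboard:v2.0.0-beta8   # runtime resolves the ARM variant
    ports:
      - containerPort: 8443
        protocol: TCP
```

The container runtime on each Raspberry Pi node picks the matching architecture variant from the manifest list at pull time.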

Install Custom Connector To Kafka Connect on Kubernetes

I'm running the Kafka Kubernetes Helm deployment, but I am unsure how to install a custom plugin.
When running a custom plugin on my local version of Kafka, I mount the volume /myplugin into the Docker image and then set the plugin path environment variable.
I am unsure how to apply this workflow to the Helm charts / Kubernetes deployment, mainly how to mount the plugin into the Kafka Connect pod so that it can be found on the default plugin.path=/usr/share/java.
Have a look at the last few slides of https://talks.rmoff.net/QZ5nsS/from-zero-to-hero-with-kafka-connect. You can mount your plugins, but the best way is either to build a new image that extends cp-kafka-connect-base, or to install the plugin at runtime; both use Confluent Hub.
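The image-extension approach can be sketched in a couple of lines (the connector name and versions below are placeholders; substitute the plugin you actually need):

```dockerfile
# Dockerfile -- extend the Connect base image with a connector from Confluent Hub
FROM confluentinc/cp-kafka-connect-base:6.0.0

# confluent-hub installs into the plugin path the base image is already
# configured with, so no extra plugin.path changes are needed
RUN confluent-hub install --no-prompt confluentinc/kafka-connect-jdbc:10.0.0
```

You then point the Helm chart's Kafka Connect image at your extended image, and no volume mounting is required.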

Start Kubernetes job with http request

What is the simplest way to start a Kubernetes job with an HTTP request (webhook)? I need to build a Docker image after a push to GitHub, and it has to happen inside the cluster.
I think you are looking for Knative, mainly the Build part of it.
Knative is still in its early stages, but it is pretty much what you need. If the Build features do not meet your needs, you can still use the other features, like Serving, to trigger a container image from HTTP calls and run the tools you need.
Here is the description from the Build Docs:
A Knative Build extends Kubernetes and utilizes existing Kubernetes primitives to provide you with the ability to run on-cluster container builds from source. For example, you can write a build that uses Kubernetes-native resources to obtain your source code from a repository, build a container image, then run that image.
While Knative builds are optimized for building, testing, and deploying source code, you are still responsible for developing the corresponding components that:
- Retrieve source code from repositories.
- Run multiple sequential jobs against a shared filesystem, for example:
  - Install dependencies.
  - Run unit and integration tests.
  - Build container images.
- Push container images to an image registry, or deploy them to a cluster.
The goal of a Knative build is to provide a standard, portable, reusable, and performance-optimized method for defining and running on-cluster container image builds. By providing the "boring but difficult" task of running builds on Kubernetes, Knative saves you from having to independently develop and reproduce these common Kubernetes-based development processes.
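To make that description concrete, a Build resource (using the v1alpha1 API current at the time; the repository URL and image names below are placeholders) looked roughly like this:

```yaml
# build.yaml -- sketch of a Knative Build: clone a Git repo, build the image
# with Kaniko, push it. Apply with "kubectl apply -f build.yaml".
apiVersion: build.knative.dev/v1alpha1
kind: Build
metadata:
  name: example-build
spec:
  source:
    git:
      url: https://github.com/my-org/my-app.git        # placeholder repo
      revision: master
  steps:
    - name: build-and-push
      image: gcr.io/kaniko-project/executor
      args:
        - --dockerfile=/workspace/Dockerfile
        - --destination=registry.example.com/my-app:latest  # placeholder registry
```

The webhook part is what triggers creating this resource, e.g. a small in-cluster service that receives the GitHub push event and applies a Build like the one above.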