Why does buildah fail running inside a Kubernetes container?

Hey, I'm creating a GitLab pipeline and I have a runner in Kubernetes.
In my pipeline I am trying to build the application as a container.
I'm building the container with buildah, which is running inside a Kubernetes pod. While the pipeline is running, kubectl get pods --all-namespaces shows the buildah pod:
NAMESPACE NAME READY STATUS RESTARTS AGE
gitlab-runner runner-wyplq6-h-project-6157-concurrent-0qc9ns 2/2 Running 0 7s
The pipeline runs
buildah login -u gitlab-ci-token -p ${CI_BUILD_TOKEN} ${CI_REGISTRY} and
buildah bud -t ${CI_REGISTRY_IMAGE}/${CI_COMMIT_BRANCH}:${CI_COMMIT_SHA} .
with the Dockerfile using FROM parity/parity:v2.5.13-stable.
buildah bud, however, fails and prints:
Login Succeeded!
STEP 1: FROM parity/parity:v2.5.13-stable
Getting image source signatures
Copying blob sha256:d1983a67e104e801fceb1850a375a71fe6b62636ba7a8403d9644f308a6a43f9
Copying blob sha256:3386e6af03b043219225367632569465e5ecd47391d1f99a6d265e51bd463a83
Copying blob sha256:49ac0bbe6c8eeb959337b336ceaa5c3bbbae81e316025f9b94ede453540f2377
Copying blob sha256:72d77d7d5e84353d77d8a8f97d250120afe3650b85010137961560bce3a327d5
Copying blob sha256:1a0f3a523f04f61db942018321ae122f90d8e3303e243b005e8de9817daf7028
Copying blob sha256:4aae9d2bd9a7a79a688ccf753f0fa9bed5ae66ab16041380e595a077e1772b25
Copying blob sha256:8326361ddc6b9703a60c5675d1e9cc4b05dbe17473f8562c51b78a1f6507d838
Copying blob sha256:92c90097dde63c8b1a68710dc31fb8b9256388ee291d487299221dae16070c4a
Copying config sha256:36be05aeb6426b5615e2d6b71c9590dbc4a4d03ae7bcfa53edefdaeef28d3f41
Writing manifest to image destination
Storing signatures
time="2022-02-08T10:40:15Z" level=error msg="Error while applying layer: ApplyLayer exit status 1 stdout: stderr: permission denied"
error creating build container: The following failures happened while trying to pull image specified by "parity/parity:v2.5.13-stable" based on search registries in /etc/containers/registries.conf:
* "localhost/parity/parity:v2.5.13-stable": Error initializing source docker://localhost/parity/parity:v2.5.13-stable: pinging docker registry returned: Get https://localhost/v2/: dial tcp [::1]:443: connect: connection refused
* "docker.io/parity/parity:v2.5.13-stable": Error committing the finished image: error adding layer with blob "sha256:3386e6af03b043219225367632569465e5ecd47391d1f99a6d265e51bd463a83": ApplyLayer exit status 1 stdout: stderr: permission denied
...
I am thinking of two possible causes:
First, the container is built and then stored inside the Kubernetes pod before being transferred to the container registry. Since the pod does not have any persistent storage, the write fails, hence this error.
Second, the container is built and pushed to the container registry, but for some reason it does not have permission to do so and fails.
Which one is it? And how do I fix it?
If it is the first reason, do I need to add persistent volume rights to the service account running the pod?

The GitLab runner needs root privileges; add this line to the [runners.kubernetes] section of the runner's configuration (config.toml):
privileged = true
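For reference, a minimal sketch of that part of config.toml, assuming a Kubernetes executor; the runner name and job image below are placeholders, not taken from the original answer:

[[runners]]
  name = "kubernetes-runner"            # placeholder name
  executor = "kubernetes"
  [runners.kubernetes]
    image = "quay.io/buildah/stable"    # default job image, adjust as needed
    privileged = true                   # lets buildah apply image layers inside the job pod

After changing config.toml, restart the runner (or let it reload its configuration) so new job pods are created with the privileged security context.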

Related

Why do I get "error while applying layer: operation not permitted" with buildah in a gitlab runner pod?

Hey, I have a GitLab runner (Kubernetes executor) that is building containers inside a pipeline.
The pipeline runs inside a pod with the image quay.io/buildah/stable and fails while calling buildah bud .:
STEP 1/12: FROM docker.io/parity/parity:v2.5.13-stable
Trying to pull docker.io/parity/parity:v2.5.13-stable...
Getting image source signatures
Copying blob sha256:4aae9d2bd9a7a79a688ccf753f0fa9bed5ae66ab16041380e595a077e1772b25
Copying blob sha256:72d77d7d5e84353d77d8a8f97d250120afe3650b85010137961560bce3a327d5
Copying blob sha256:49ac0bbe6c8eeb959337b336ceaa5c3bbbae81e316025f9b94ede453540f2377
Copying blob sha256:d1983a67e104e801fceb1850a375a71fe6b62636ba7a8403d9644f308a6a43f9
Copying blob sha256:1a0f3a523f04f61db942018321ae122f90d8e3303e243b005e8de9817daf7028
Copying blob sha256:3386e6af03b043219225367632569465e5ecd47391d1f99a6d265e51bd463a83
Copying blob sha256:3386e6af03b043219225367632569465e5ecd47391d1f99a6d265e51bd463a83
...
Copying blob sha256:92c90097dde63c8b1a68710dc31fb8b9256388ee291d487299221dae16070c4a
time="2022-03-03T16:01:36Z" level=error msg="Error while applying layer: ApplyLayer exit status 1 stdout: stderr: operation not permitted"
error creating build container: writing blob: adding layer with blob "sha256:3386e6af03b043219225367632569465e5ecd47391d1f99a6d265e51bd463a83": ApplyLayer exit status 1 stdout: stderr: operation not permitted
I tried recreating this in a new pod: kubectl run -it buildah --image containers/buildah --command -- tail -f /dev/null with a simpler Dockerfile:
FROM ubuntu
RUN touch /test
CMD ["echo", "hello"]
and it worked. Building with the actual project outside the pod also works.
I couldn't figure out how to mount the project into the pod, so I haven't built the whole project in there yet.
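For what it's worth, one way to copy a project into such a throwaway pod is kubectl cp; the local path below is a placeholder, not from the original post:

# copy the project directory into the running pod, then build inside it
kubectl cp ./my-project buildah:/src
kubectl exec -it buildah -- sh -c 'cd /src && buildah bud -t test .'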
So why doesn't it work in the GitLab runner? Could this be a misconfiguration in the GitLab runner?

When trying to connect Jenkins and Kubernetes, the Jenkins job throws the following error:

Started by user admin.
Running as SYSTEM.
Building in workspace /var/lib/jenkins/workspace/myjob
[myjob] $ /bin/sh -xe /tmp/jenkins8491647919256685444.sh
+ sudo kubectl get pods
error: the server doesn't have a resource type "pods"
Build step 'Execute shell' marked build as failure
Finished: FAILURE
It looks to me like the authentication credentials were not set correctly. Copy the kubeconfig file /etc/kubernetes/admin.conf to ~/.kube/config and check that the KUBECONFIG variable is set.
It would also help to increase the verbosity level using the flag --v=99.
Please take a look: kubernetes-configuration.
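A sketch of those steps for the user that runs the Jenkins job (the copy and ownership commands assume a typical kubeadm-style setup):

mkdir -p $HOME/.kube
sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
export KUBECONFIG=$HOME/.kube/config
kubectl get pods --v=99   # high verbosity shows which server and credentials are used

Note that the job's shell step runs kubectl through sudo, which typically changes $HOME, so the copied ~/.kube/config may not be picked up unless KUBECONFIG is exported in that environment (or sudo is dropped).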

ImagePullBackOff Error

I'm using Minikube on a Windows-based machine. On the same machine, I also have docker-machine set up.
I've pointed the Docker client towards Minikube's Docker environment, so I can see the Docker environment inside Kubernetes.
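(For context, pointing a PowerShell session's Docker client at Minikube's daemon is typically done with something like the following; the exact invocation is an assumption about this setup:)

# make the current PowerShell session use Minikube's Docker daemon
minikube docker-env --shell powershell | Invoke-Expression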
I can build Docker images and run Docker containers from the Minikube VM without issues. However, when I try to start any Docker container via kubectl (from PowerShell), it fails to start, seemingly because kubectl can't find the Docker image, with the following error -
Failed to pull image "image name": rpc error: code = Unknown desc =
Error response from daemon: repository "image-repo-name" not found:
does not exist or no pull access Error syncing pod
I don't know what's missing. If docker run can access the image, why can't kubectl?
Here my Dockerfile:
FROM node:4.4
EXPOSE 9002
COPY server.js .
CMD node server.js
Make sure the image path in your YAML is correct and that the image exists on your local machine. It should be tagged with a specific version number, not latest (with the latest tag, Kubernetes defaults to always pulling from a remote registry).
Have something like this in your deployment YAML:
image: redis:1.0.48
Run docker images to see the list of images on your machine.
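A sketch of the relevant part of such a deployment, using placeholder names and a tag based on the Dockerfile above rather than the asker's actual manifest:

# excerpt from a Deployment's pod spec; with a non-latest tag the default
# imagePullPolicy is IfNotPresent, so the locally built image is used
containers:
- name: my-node-app
  image: my-node-app:1.0.0        # must match a tag listed by 'docker images'
  imagePullPolicy: IfNotPresent
  ports:
  - containerPort: 9002           # matches the EXPOSE in the Dockerfile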

connect to shell terminal of other container in a pod

When I define multiple containers in a pod/pod template, for example one container running an agent and another running php-fpm, how can they access each other? I need the agent container to connect to php-fpm through a shell and execute a few steps interactively.
Based on my understanding, we can package kubectl into the agent container and use kubectl exec -it <container id> sh to connect to the container. But I don't want the agent container to have more privileges than it needs to connect to the target container, which is php-fpm.
Is there a better way for the agent container to connect to php-fpm through a shell and execute commands interactively?
Also, I wasn't successful in running kubectl from a container when using Minikube, due to the following errors:
docker run -it -v ~/.kube:/root/.kube lachlanevenson/k8s-kubectl get nodes
Error in configuration:
* unable to read client-cert /Users/user/.minikube/apiserver.crt for minikube due to open /Users/user/.minikube/apiserver.crt: no such file or directory
* unable to read client-key /Users/user/.minikube/apiserver.key for minikube due to open /Users/user/.minikube/apiserver.key: no such file or directory
* unable to read certificate-authority /Users/user/.minikube/ca.crt for minikube due to open /Users/user/.minikube/ca.crt: no such file or directory
First off, every Pod within a k8s cluster has its own k8s credentials provided by /var/run/secrets/kubernetes.io/serviceaccount/token, so there is no need to attempt to volume mount your home directory into a docker container.
The reason you are getting the error about client-cert is that ~/.kube/config merely contains strings pointing to externally defined SSL key, certificate, and CA certificate files (here under /Users/user/.minikube), which are not present inside the container -- but I won't speak to fixing that problem further, since there is no good reason to be using that approach.
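To illustrate the first point (a sketch, not from the original answer): inside any pod, the mounted service account credentials can be used directly against the API server, and kubectl picks them up automatically when no kubeconfig is present:

# inside a pod: token, CA certificate, and namespace are mounted automatically
SA=/var/run/secrets/kubernetes.io/serviceaccount
curl -sS --cacert $SA/ca.crt \
  -H "Authorization: Bearer $(cat $SA/token)" \
  https://kubernetes.default.svc/api/v1/namespaces/$(cat $SA/namespace)/pods

(Listing or exec'ing into pods this way still requires the pod's service account to be granted the corresponding RBAC permissions.)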

Error running pod with image from gcr.io

I've successfully pushed my Docker container image to gcr.io with the following command:
$ gcloud docker push gcr.io/project-id-123456/my-image
But when I try to create a new pod I get the following error:
$ kubectl run my-image --image=gcr.io/project-id-123456/my-image
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
my-image my-image gcr.io/project-id-123456/my-image run=my-image 1
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
my-image-of9x7 0/1 Error pulling image (latest) from gcr.io/project-id-123456/my-image, Untar exit status 1 unexpected EOF 0 5m
It doesn't pull locally either:
$ docker rmi -f $(docker images -q) # Clear local image cache
$ gcloud docker pull gcr.io/project-id-123456/my-image:latest
…
Error pulling image (latest) from gcr.io/project-id-123456/my-image, Untar re-exec error: exit status 1: output: unexpected EOF
Can someone please suggest how to fix this?
Ok, after digging around in the Docker code base, I think I have found some similar reports of what you are seeing.
The way this error is displayed changed in 1.7, but this thread seems related:
https://github.com/docker/docker/issues/14792
This turned me onto this fix, which landed in 1.8:
https://github.com/docker/docker/pull/15040
In particular, see this comment:
https://github.com/docker/docker/pull/15040#issuecomment-125661037
The comment seems to indicate that this is only a problem for v1 layers, so our Beta support for v2 may work around this issue.
You can push to our v2 beta via:
gcloud docker --server=beta.gcr.io push beta.gcr.io/project-id-123456/...
You can then simply change the reference in your Pod to "beta.gcr.io/..." and it will pull via v2.
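Concretely, re-pushing and re-running with the image name from the question (a sketch of the two commands, not taken verbatim from the answer) would look like:

gcloud docker --server=beta.gcr.io push beta.gcr.io/project-id-123456/my-image
kubectl run my-image --image=beta.gcr.io/project-id-123456/my-image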