wiki.js: exec user process caused: exec format error on postgres container - postgresql

I'm trying to deploy wiki.js to my K3s cluster of four RPi4s.
For this, I ran these commands following the install instructions (https://docs.requarks.io/install/kubernetes):
$ helm repo add requarks https://charts.js.wiki
$ helm repo update
$ helm install wikijs requarks/wiki
After those commands, I get the following:
NAME: wikijs
LAST DEPLOYED: Tue Jun 14 13:25:30 2022
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
1. Get the application URL by running these commands:
http://wiki.minikube.localmap[path:/ pathType:Prefix]
However, when I get the pods, I get the following:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
wikijs-7f6c8b9f54-lz55k 0/1 ContainerCreating 0 3s
wikijs-postgresql-0 0/1 Error 0 3s
Finally, viewing the postgres logs, I get:
$ kubectl logs wikijs-postgresql-0
standard_init_linux.go:228: exec user process caused: exec format error
I believe this is an error about an executable running on the wrong architecture, but both wiki.js and PostgreSQL support ARM64, so deploying the app should pick the right architecture, shouldn't it?
If I need to select the architecture manually, how can I do so? I've looked through the wiki.js chart and I can't find where to select the postgres image.
Many thanks!

I was running into the same issue. The problem is the postgres image being run on your RPi. I was able to get this to work on my RPi4 by using docker.io/arm64v8/postgres:14 as the image for the postgresql StatefulSet.
I had to change the image in two places within the Helm chart:
# charts/postgresql/values.yaml
image:
  registry: docker.io
  repository: arm64v8/postgres
  tag: 14

volumePermissions:
  enabled: true
  image:
    registry: docker.io
    repository: arm64v8/postgres
    tag: 14
The latter is for the initContainer (see statefulset template within the postgresql chart).
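If you prefer not to edit the chart files directly, the same overrides can usually be passed at install time instead. This is only a sketch: it assumes the wiki chart exposes the bundled PostgreSQL subchart under the postgresql key, so check the chart's values.yaml for the exact key names before relying on it:
$ helm install wikijs requarks/wiki \
    --set postgresql.image.registry=docker.io \
    --set postgresql.image.repository=arm64v8/postgres \
    --set postgresql.image.tag=14 \
    --set postgresql.volumePermissions.enabled=true \
    --set postgresql.volumePermissions.image.registry=docker.io \
    --set postgresql.volumePermissions.image.repository=arm64v8/postgres \
    --set postgresql.volumePermissions.image.tag=14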

Related

Deploying zookeeper from Bitnami Helm chart

We are using the Bitnami ZooKeeper chart to create a VMware Carvel package. First I tried deploying the Helm chart; the pod was pending and I got the output below. But how do I install zkCli.sh?
I want to test the output, but I can't figure out how to install zkCli.sh.
$ helm install zookeeper bitnami/zookeeper
NAME: zookeeper
LAST DEPLOYED: Wed Apr 20 13:33:03 2022
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: zookeeper
CHART VERSION: 9.0.3
APP VERSION: 3.8.0
** Please be patient while the chart is being deployed **
ZooKeeper can be accessed via port 2181 on the following DNS name from within your cluster:
zookeeper.default.svc.cluster.local
To connect to your ZooKeeper server run the following commands:
export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=zookeeper,app.kubernetes.io/instance=zookeeper,app.kubernetes.io/component=zookeeper" -o jsonpath="{.items[0].metadata.name}")
kubectl exec -it $POD_NAME -- zkCli.sh
To connect to your ZooKeeper server from outside the cluster execute the following commands:
kubectl port-forward --namespace default svc/release-name-zookeeper-0 2181: &
zkCli.sh 127.0.0.1:2181
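For what it's worth, zkCli.sh ships inside the Bitnami ZooKeeper image itself, so there is nothing to install on your machine; as the chart NOTES above suggest, you exec it inside one of the ZooKeeper pods. A minimal sketch, assuming the release is named zookeeper so the first pod is zookeeper-0:
# open an interactive ZooKeeper shell inside the pod
$ kubectl exec -it zookeeper-0 -- zkCli.sh -server 127.0.0.1:2181
# once connected, list the root znodes
ls /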

Kubernetes Alpha Debug command

I'm trying the debug CLI feature in the 1.18 release of Kubernetes, but I run into an issue when I execute the debug command.
I created a pod as shown below.
kubectl run ephemeral-demo --image=k8s.gcr.io/pause:3.1 --restart=Never
Then, when running this command: kubectl alpha debug -it ephemeral-demo --image=busybox --target=ephemeral-demo
Kubernetes hangs like this:
Defaulting debug container name to debugger-aaaa.
How can I resolve this issue?
It seems that you need to enable the feature gate on all control plane components as well as on the kubelets. If the feature is enabled partially (for instance, only kube-apiserver and kube-scheduler), the resources will be created in the cluster, but no containers will be created, thus there will be nothing to attach to.
In addition to the answer posted by Konstl:
To enable the EphemeralContainers feature gate correctly, edit the following files on the master nodes:
/etc/kubernetes/manifests/kube-apiserver.yaml
/etc/kubernetes/manifests/kube-controller-manager.yaml
/etc/kubernetes/manifests/kube-scheduler.yaml
and add this line to the container command:
spec:
  containers:
  - command:
    - kube-apiserver   # or kube-controller-manager/kube-scheduler
    - --feature-gates=EphemeralContainers=true   # <-- add this line
Pods will restart immediately.
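To confirm that the static pods actually came back up with the flag, you can check the running pod spec (the node name below is a placeholder for your control plane node):
$ kubectl -n kube-system get pod kube-apiserver-<control-plane-node> -o yaml | grep feature-gates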
To enable the feature gate for the kubelet, on all nodes add to:
/var/lib/kubelet/config.yaml
the following lines at the bottom:
featureGates:
  EphemeralContainers: true
Save the file and run the following command:
$ systemctl restart kubelet
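To double-check that the kubelet picked up the new configuration, its effective config can be read back through the API server's node proxy endpoint (the node name is a placeholder):
$ kubectl get --raw "/api/v1/nodes/<node-name>/proxy/configz" | grep -i ephemeral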
This was enough in my case to be able to use kubectl alpha debug as explained in the documentation.
Additional useful pages:
Ephemeral Containers
Share Process Namespace between Containers in a Pod
I'm seeing similar behaviour, although I'm running 1.17 k8s with 1.18 kubectl. I was under the impression that the feature was added to k8s in 1.16 and to kubectl in 1.18.

Pulling local repository docker image from kubernetes

I'm trying to install a sample container app using a Pod in my local environment.
I'm using the Kubernetes cluster that comes with Docker Desktop.
I'm creating the Pod with the command below and the following YML file:
kubectl create -f test_image_pull.yml
apiVersion: v1
kind: Pod
metadata:
  # value must be lower case
  name: sample-python-web-app
spec:
  containers:
  - name: sample-hello-world
    image: local/sample:latest
    imagePullPolicy: Always
    command: ["echo", "SUCCESS"]
This is the Dockerfile used to build the image; the container runs without any issue if you start it with docker run:
# Use official runtime python
FROM python:2.7-slim
# set work directory to app
WORKDIR /app
# Copy current directory
COPY . /app
# install needed packages
RUN pip install --trusted-host pypi.python.org -r requirement.txt
# Make port 80 available to outside container
EXPOSE 80
# Define environment variable
ENV NAME World
# Run app.py when the container launches
CMD ["python" , "app.py"]
app.py:
from flask import Flask
from redis import Redis, RedisError
import os
import socket

# connect to redis
redis = Redis(host="redis", db=0, socket_connect_timeout=2, socket_timeout=2)

app = Flask(__name__)

@app.route("/")
def hello():
    try:
        visits = redis.incr("counter")
    except RedisError:
        visits = "<i>cannot connect to Redis, counter disabled</i>"

    html = "<h3>Hello {name}!</h3>" \
           "<b>Hostname:</b> {hostname}<br/>" \
           "<b>Visits:</b> {visits}"
    return html.format(
        name=os.getenv("NAME", "world"),
        hostname=socket.gethostname(),
        visits=visits
    )

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=80)
requirement.txt:
Flask
Redis
When I describe the pod, it shows the error below:
kubectl describe pod sample-python-web-app
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 3m25s default-scheduler Successfully assigned default/sample-python-web-app to docker-desktop
Normal Pulling 97s (x4 over 3m22s) kubelet, docker-desktop Pulling image "local/sample:latest"
Warning Failed 94s (x4 over 3m17s) kubelet, docker-desktop Failed to pull image "local/sample:latest": rpc error: code = Unknown desc = Error response from daemon: pull access denied for local/sample, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
Warning Failed 94s (x4 over 3m17s) kubelet, docker-desktop Error: ErrImagePull
Normal BackOff 78s (x6 over 3m16s) kubelet, docker-desktop Back-off pulling image "local/sample:latest"
Warning Failed 66s (x7 over 3m16s) kubelet, docker-desktop Error: ImagePullBackOff
Kubernetes pulls container images from a Docker Registry. Per the doc:
You create your Docker image and push it to a registry before
referring to it in a Kubernetes pod.
Moreover:
The image property of a container supports the same syntax as the
docker command does, including private registries and tags.
So, with the image referenced in the pod's spec as "image: local/sample:latest", Kubernetes looks on Docker Hub for the image in a repository named "local".
You can push the image to Docker Hub or some other external Docker Registry, public or private; you can host a Docker Registry on the Kubernetes cluster; or you can run a Docker Registry locally, in a container.
To run a Docker registry locally:
docker run -d -p 5000:5000 --restart=always --name registry registry:2
Next, find the IP address of the host - below I'll use 10.0.2.1 as an example.
Then, assuming the image name is "local/sample:latest", tag the image:
docker tag local/sample:latest 10.0.2.1:5000/local/sample:latest
...and push the image to the local registry:
docker push 10.0.2.1:5000/local/sample:latest
Next, change how the image is referenced in the pod's configuration YAML - from
image: local/sample:latest
to
image: 10.0.2.1:5000/local/sample:latest
Restart the pod.
EDIT: Most likely the local Docker daemon will have to be configured to treat the local Docker registry as insecure. One way to configure that is described here - just replace "myregistrydomain.com" with the host's IP (e.g. 10.0.2.1). Docker Desktop also allows editing the daemon's configuration file through the GUI.
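As a sketch of what that typically looks like in the daemon's configuration file (daemon.json), with the example IP used above:
{
  "insecure-registries": ["10.0.2.1:5000"]
}
After changing it, restart the Docker daemon so the setting takes effect.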
If you want to set up a local repository for your Kubernetes cluster, you might follow this guide.
I would recommend using Trow.io, an image management solution for Kubernetes, to quickly create a registry that runs within Kubernetes and provides a secure and fast way to get containers running on the cluster.
We're building an image management solution for Kubernetes (and possibly other orchestrators). At its heart is the Trow Registry, which runs inside the cluster, is simple to set-up and fully integrated with Kubernetes, including support for auditing and RBAC.
Why "Trow"
"Trow" is a word with multiple, divergent meanings. In Shetland folklore a trow is a small, mischievous creature, similar to the Scandanavian troll. In England, it is a old style of cargo boat that transported goods on rivers. Finally, it is an archaic word meaning "to think, believe, or trust". The reader is free to choose which interpretation they like most, but it should be pronounced to rhyme with "brow".
The whole installation process is described here.

Why is a kubernetes pod with a simple hello-world image getting a CrashLoopBackOff message

pod.yml
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod
  labels:
    zone: prod
    version: v1
spec:
  containers:
  - name: hello-ctr
    image: hello-world:latest
    ports:
    - containerPort: 8080
kubectl create -f pod.yml
kubectl get pods
NAME READY STATUS RESTARTS AGE
hello-pod 0/1 CrashLoopBackOff 5 5m
Why CrashLoopBackOff?
In this case the behavior is expected. The hello-world container is meant to print some messages and then exit after completion. That is why you are getting CrashLoopBackOff:
Kubernetes runs a pod - the container inside runs the expected commands and then exits.
Then there is nothing running underneath, so the pod is run again -> the same thing happens and the number of restarts grows.
You can see this in kubectl describe pod, where the Terminated state is visible with Reason: Completed. If you chose a container image which does not exit after completion, the pod would stay in Running state.
The hello-world container is actually exiting, so Kubernetes thinks it's crashing, keeps restarting it, and the pod goes into CrashLoopBackOff. When you docker run your hello-world container you get this:
$ sudo docker run hello-world
Hello from Docker!
This message shows that your installation appears to be working correctly.
To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
(amd64)
3. The Docker daemon created a new container from that image which runs the
executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it
to your terminal.
To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash
Share images, automate workflows, and more with a free Docker ID:
https://hub.docker.com/
For more examples and ideas, visit:
https://docs.docker.com/get-started/
$
So, it's a one-shot task and not a service. Kubernetes has Jobs and CronJobs for these types of workloads.
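If you just want hello-world to run once to completion, a Job is the natural fit. A minimal sketch (names are illustrative):
apiVersion: batch/v1
kind: Job
metadata:
  name: hello-job
spec:
  template:
    spec:
      containers:
      - name: hello
        image: hello-world:latest
      restartPolicy: Never
  backoffLimit: 2
Created with kubectl create -f job.yml, the pod finishes in Completed state instead of CrashLoopBackOff, and kubectl logs job/hello-job shows the same greeting.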

Minikube can't pull image from local registry

I ran eval $(minikube docker-env) then built a docker container. When I run docker images on my host I can see the image. When I run minikube ssh then docker images I can see it.
When I try to run it, the pod fails to launch. kubectl describe pod gives:
14m 3m 7 kubelet, minikube spec.containers{quoting-crab-customer-refresh-cache-cron} Normal Pulling pulling image "personalisation-customer:latest"
14m 3m 7 kubelet, minikube spec.containers{quoting-crab-customer-refresh-cache-cron} Warning Failed Failed to pull image "personalisation-customer:latest": rpc error: code = 2 desc = Error: image library/personalisation-customer:latest not found
14m 2s 66 kubelet, minikube Warning FailedSync Error syncing pod
14m 2s 59 kubelet, minikube spec.containers{quoting-crab-customer-refresh-cache-cron} Normal BackOff Back-off pulling image "personalisation-customer:latest"
My imagePullPolicy is Always.
What could be causing this? Other pods are working locally.
You aren't exactly pulling from your local registry; you would be using your previously downloaded or locally built images. Since you are specifying imagePullPolicy: Always, Kubernetes will always try to pull the image from a registry.
Your image name doesn't specify a registry: docker will understand personalisation-customer:latest as index.docker.io/personalisation-customer:latest, and that image doesn't exist in the public Docker registry.
So you have two options: set imagePullPolicy: IfNotPresent, or upload the image to some registry.
The local Docker cache isn't a registry. Kubernetes tries to download the image from Docker Hub (the default registry), since you set imagePullPolicy to Always. Set it to Never so Kubernetes uses the local image.
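A minimal sketch of the relevant part of the pod spec, reusing the image name from the question (the container name is illustrative):
containers:
- name: personalisation-customer
  image: personalisation-customer:latest
  imagePullPolicy: Never   # use the image already present in minikube's Docker daemon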
I had the same issue. The problem was not specifying the image version. I used
kubectl run testapp1 --image=<image> --image-pull-policy=IfNotPresent
instead of,
kubectl run testapp1 --image=<image>:<version> --image-pull-policy=IfNotPresent
For minikube, make the changes as follows:
1. Eval the docker env using:
   eval $(minikube docker-env)
2. Build the docker image:
   docker build -t my-image .
3. Set the image name to just "my-image" in the pod specification in your YAML file.
4. Set the imagePullPolicy to Never in your YAML file. Here is an example:
apiVersion:
kind:
metadata:
spec:
  template:
    metadata:
      labels:
        app: my-image
    spec:
      containers:
      - name: my-image
        image: "my-image"
        imagePullPolicy: Never