Back-off restarting container issue, REST API pod - Kubernetes

I am a beginner trying to create a Hyperledger-based IoT environment. The only issue I am facing is that my REST API pod gets into a CrashLoopBackOff state. I have checked the logs, and they show the following error.
[screenshot of the pod logs showing the error]

Your container image is broken; this has nothing to do with k8s.
It seems your container is using a Node.js version that your application does not support.
Try docker run <your_image> to reproduce the failure outside the cluster.
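To dig further, a minimal sketch (the image name is a placeholder; the version check assumes node is on the image's PATH):

# see the full crash output and stack trace
docker run --rm <your_image>
# check which Node.js version the image actually ships
docker run --rm --entrypoint node <your_image> --version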

Related

Unable to enter a pod in the GKE cluster

We have our k8s cluster set up with our app, including a Neo4j DB deployment and other artifacts. Overnight, we've started facing an issue in our GKE cluster when trying to enter or otherwise interact with any pod running in the cluster. Every command we issue against a pod fails with the following error:
error: unable to upgrade connection: Authorization error (user=kube-apiserver, verb=create, resource=nodes, subresource=proxy)
Our GKE cluster is created as standard (not Autopilot). [screenshots: node pool details and cluster basics, showing the versions]
As said before, it was working fine regardless of the warning about the versions. However, we haven't yet been able to identify what could have changed between the last time it worked and now.
Any clue about what authorization setup might have changed, making it incompatible now, is very welcome.
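For reference, an error of this exact shape means the kubelet's webhook authorization denied the API server (user kube-apiserver) the nodes/proxy permission that kubectl exec/attach/logs rely on. A hedged sketch of how one might inspect, and if missing recreate, the usual RBAC grant (system:kubelet-api-admin is a standard ClusterRole; the binding name is illustrative, and whether your GKE cluster relied on such a binding is an assumption):

# look for an existing binding that grants kubelet API access to kube-apiserver
kubectl get clusterrolebindings -o wide | grep -i kubelet
kubectl describe clusterrole system:kubelet-api-admin
# if no binding covers user kube-apiserver, recreating one often restores exec/attach
kubectl create clusterrolebinding kubelet-api-admin-binding \
  --clusterrole=system:kubelet-api-admin --user=kube-apiserver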

Failing to run Mattermost locally on a Kubernetes cluster using Minikube

Summary in one sentence
I want to deploy Mattermost locally on a Kubernetes cluster using Minikube
Steps to reproduce
I used this tutorial and the GitHub documentation:
https://mattermost.com/blog/how-to-get-started-with-mattermost-on-kubernetes-in-just-a-few-minutes/
https://github.com/mattermost/mattermost-operator/tree/v1.15.0
To start Minikube: minikube start --kubernetes-version=v1.21.5
To enable ingress: minikube addons enable ingress
I cloned the GitHub repo with tag v1.15.0 (second link)
In the GitHub documentation (second link) they state that you need to install the Custom Resources by running: kubectl apply -f ./config/crd/bases
Afterwards I installed the MinIO and MySQL operators by running: make mysql-minio-operators
Started the mattermost-operator locally by running: go run .
In the end I deployed Mattermost (I followed steps 2, 7, and 9 from the first link); the full command sequence is consolidated below.
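For reference, a consolidated sketch of the commands from the steps above (it assumes a fresh shell and that the clone succeeds):

minikube start --kubernetes-version=v1.21.5
minikube addons enable ingress
git clone --branch v1.15.0 https://github.com/mattermost/mattermost-operator.git
cd mattermost-operator
kubectl apply -f ./config/crd/bases
make mysql-minio-operators
go run .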
Observed behavior
Unfortunately I keep getting the following error in the mattermost-operator:
INFO[1419] [opr.controllers.Mattermost] Reconciling Mattermost Request.Name=mm-demo Request.Namespace=mattermost
INFO[1419] [opr.controllers.Mattermost] Updating resource Reconcile=fileStore Request.Name=mm-demo Request.Namespace=mattermost kind="&TypeMeta{Kind:,APIVersion:,}" name=mm-demo-minio namespace=mattermost patch="{\"status\":{\"availableReplicas\":0}}"
INFO[1419] [opr.controllers.Mattermost.health-check] mattermost pod not ready: pod mm-demo-ccbd46b9c-9nq8k is in state 'Pending' Request.Name=mm-demo Request.Namespace=mattermost
INFO[1419] [opr.controllers.Mattermost.health-check] mattermost pod not ready: pod mm-demo-ccbd46b9c-tp567 is in state 'Pending' Request.Name=mm-demo Request.Namespace=mattermost
ERRO[1419] [opr.controllers.Mattermost] Error checking Mattermost health Request.Name=mm-demo Request.Namespace=mattermost error="found 0 updated replicas, but wanted 2"
Using k9s I can see that mm-demo won't start. [screenshot of k9s showing the mm-demo pods]
Another variation of deployment
I also tried another variation by following all the steps from the first link (without the licence secret step). At this point the mattermost-operator is visible in k9s and isn't reporting any errors, but unfortunately the mm-demo pod keeps crashing (the logs are empty, so I see no errors).
Does anybody have an idea?
As @Ashish faced the same issue, he fixed it by increasing the resources.
Minikube will be able to run all the pods when started with: minikube start --kubernetes-version=v1.21.5 --memory 4000 --cpus 4
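Before restarting Minikube with more resources, one can confirm that scheduling is the problem by inspecting the Pending pods' events (the pod name is taken from the log output above; exact messages will vary):

kubectl -n mattermost get pods
kubectl -n mattermost describe pod mm-demo-ccbd46b9c-9nq8k
# in the Events section, a scheduler message like "0/1 nodes are available:
# 1 Insufficient cpu" or "Insufficient memory" confirms a resource shortage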

Taking a snapshot of a running container via kubectl

Is it possible to take an image or a snapshot of a container running inside a pod using kubectl?
Via docker, it is possible to use the docker commit command that creates an image of a container from which we can spawn more containers. I wanted to understand if there was something similar that we could do with kubectl.
No, partially because that's not in the kubernetes mental model of anything one would wish to do to a cluster, and partially because docker is not the only container runtime kubernetes uses. Every runtime one could use underneath kubernetes would need to support that operation, and I doubt they do.
You are welcome to do your own docker commit either by getting a shell on the Node, or by running a privileged Pod, connecting to the docker.sock via a volumeMount, and running it that way.
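A minimal sketch of that second approach, assuming the node actually runs the Docker engine and exposes /var/run/docker.sock (the pod name, image tag, and target container ID are placeholders):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: docker-commit-helper
spec:
  containers:
  - name: cli
    image: docker:cli           # any image that ships the docker CLI works
    command: ["sleep", "86400"]
    securityContext:
      privileged: true
    volumeMounts:
    - name: docker-sock
      mountPath: /var/run/docker.sock
  volumes:
  - name: docker-sock
    hostPath:
      path: /var/run/docker.sock
      type: Socket
EOF

# find the target container's ID, then commit it to a new image on that node
kubectl exec -it docker-commit-helper -- docker ps
kubectl exec -it docker-commit-helper -- docker commit <container-id> my-snapshot:v1

Note that the committed image exists only in that node's Docker daemon; you would still have to push it to a registry to use it anywhere else.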

Heapster stuck in ContainerCreating or Pending status

I am new to Kubernetes and have been working with it for the past month.
When setting up the cluster, I sometimes see that Heapster gets stuck in ContainerCreating or Pending status. When this happens, the only way I have found to fix it is to reinstall everything from scratch, which solves the problem; afterwards Heapster runs without any issue. But I think this is not the optimal solution every time, so please help me solve this issue when it occurs again.
The Heapster image is pulled from GitHub for our use. Right now the cluster is running fine, so I cannot send a screenshot of Heapster failing with its status stuck in ContainerCreating or Pending.
Please suggest an alternative way to solve the problem if it occurs again.
Thanks in advance for your time.
A pod stuck in Pending state can mean more than one thing. Next time it happens you should run 'kubectl get pods' and then 'kubectl describe pod <pod-name>'. However, since it works sometimes, the most likely cause is that the cluster doesn't have enough resources on any of its nodes to schedule the pod. If the cluster is low on remaining resources, you should get an indication of this from 'kubectl top nodes' and 'kubectl describe nodes'. (Or, with GKE, if you are on Google Cloud, you often get a low-resource warning in the web UI console.)
(Or, if on Azure, be wary of https://github.com/Azure/ACS/issues/29.)
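For illustration, that diagnostic sequence as concrete commands (the pod name is a placeholder; Heapster typically runs in the kube-system namespace):

kubectl get pods --all-namespaces | grep -i heapster
kubectl -n kube-system describe pod <heapster-pod-name>   # read the Events section at the bottom
kubectl top nodes        # node utilization; needs metrics (Heapster/metrics-server) to be up
kubectl describe nodes   # compare "Allocated resources" against node capacity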

Is it possible to deploy a local docker image on kubernetes?

I was trying to deploy my local Docker image on Kubernetes, but it doesn't work for me.
I loaded the image into Docker and tagged it as app:v1, then I ran it with kubectl like this: kubectl run app --image=app:v1 --port=8080.
When I look up my pods, I see the error "Failed to pull image "app:v1": rpc error: code = 2 desc = Error: image library/app not found".
What am I doing wrong?
In the normal case your Kubernetes cluster runs on a different machine than the one your docker build was run on, hence it has no access to your local image (unless you are using Minikube and you eval Minikube's Docker environment to actually run your docker commands against the Docker daemon powering the Minikube install).
To get it working you need to push the image to a registry available to the Kubernetes cluster.
By running your command, you actually tell Kubernetes to pull app:v1 from the official Docker Hub library images.
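Both options as a sketch (the registry hostname is a placeholder; option A assumes Minikube):

# Option A: build straight into Minikube's Docker daemon, so no pull is needed
eval $(minikube docker-env)
docker build -t app:v1 .
kubectl run app --image=app:v1 --port=8080 --image-pull-policy=Never

# Option B: tag and push to a registry the cluster can reach
docker tag app:v1 myregistry.example.com/app:v1
docker push myregistry.example.com/app:v1
kubectl run app --image=myregistry.example.com/app:v1 --port=8080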