I can't get into the Azure Kubernetes pod - kubernetes

I want to get inside the pod to check a flaw I'm having with the image, but the errors below appear. As I verified with describe, the container name is correct. What else can I do to connect to the container in the cluster?
Command: kubectl exec -it -c airflow-console -n airflow airflow-console-xxxxxxx-xxxxx bash
error: unable to upgrade connection: container not found ("airflow-console")
Command: kubectl describe pod/airflow-console-xxxxxxx-xxxxx -n airflow
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 37m default-scheduler Successfully assigned airflow/airflow-console-xxxxxxx-xxxxx to aks-test
Normal Pulling 37m kubelet, aks-test Pulling image "test.azurecr.io/airflow:2"
Normal Pulled 37m kubelet, aks-test Successfully pulled image "test.azurecr.io/airflow:2"
Warning BackOff 36m kubelet, aks-test Back-off restarting failed container
Normal Pulled 36m (x3 over 37m) kubelet, aks-test Container image "k8s.gcr.io/git-sync:v3.1.2" already present on machine
Normal Created 36m (x3 over 37m) kubelet, aks-test Created container git-sync
Normal Started 36m (x3 over 37m) kubelet, aks-test Started container git-sync
Normal Created 36m (x3 over 36m) kubelet, aks-test Created container airflow-console
Normal Pulled 36m (x2 over 36m) kubelet, aks-test Container image "test.azurecr.io/airflow:2" already present on machine
Normal Started 36m (x3 over 36m) kubelet, aks-test Started container airflow-console
Warning BackOff 2m15s (x178 over 36m) kubelet, aks-test Back-off restarting failed container

This line
Warning BackOff 2m15s (x178 over 36m) kubelet, aks-test Back-off restarting failed container
shows that your pod/container is in a failed state. This prevents you from executing commands in the container, because it is not alive.
To learn why your pod/container is in a bad state, look at the logs of the failed container:
kubectl logs -n airflow airflow-console-xxxxxxx-xxxxx -c airflow-console
or the logs of the previous container instance that failed (sometimes that helps):
kubectl logs -n airflow airflow-console-xxxxxxx-xxxxx -c airflow-console -p
This is the main reason why a user cannot exec into a container.
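If the container crashes too quickly for the logs to tell you anything, a common workaround (my addition, not part of the answer above) is to temporarily override the container's command so it stays alive long enough to exec into. A sketch, assuming the pod is managed by a Deployment named airflow-console and that the airflow-console container is listed first (adjust the index if git-sync comes first):
# Keep the container alive so it can be inspected; "sleep infinity" assumes a
# GNU coreutils sleep exists in the image.
kubectl patch deployment airflow-console -n airflow --type='json' \
  -p='[{"op":"add","path":"/spec/template/spec/containers/0/command","value":["sleep","infinity"]}]'
# Exec into the replacement pod once it is running:
kubectl exec -it -n airflow <new-pod-name> -c airflow-console -- bash
# Revert by removing the command override:
kubectl patch deployment airflow-console -n airflow --type='json' \
  -p='[{"op":"remove","path":"/spec/template/spec/containers/0/command"}]'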

Related

GKE Deploy issue - Free Tier with credit - Workloads

I am trying to deploy on a minimal cluster and failing.
How can I tweak the configuration to make the availability green?
My input:
My application is Spring + Angular (please suggest an easy way to deploy both).
My docker-compose created 2 containers. I pushed them to the registry (tagged).
When deploying in Workloads, I added one container after the other and clicked Deploy. The resulting error is shown below.
Is there a file I need to create, some kind of .yml or .yaml manifest? (A minimal sketch of one follows at the end of this section.)
kubectl get pods
> NAME READY STATUS RESTARTS AGE
> nginx-1-d...7-2s6hb 0/2 CrashLoopBackOff 18 25m
> nginx-1-6..d7-7645w 0/2 CrashLoopBackOff 18 25m
> nginx-1-6...7-9qgjx 0/2 CrashLoopBackOff 18 25m
Events from describe
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 17m default-scheduler Successfully assigned default/nginx-1-5d...56xp4 to gke-cluster-huge-default-pool-b6..60-4rj5
Normal Pulling 17m kubelet Pulling image "eu.gcr.io/p..my/py...my_appserver@sha256:479bf3e12ee2b410d730...579b940adc8845be74956f5"
Normal Pulled 17m kubelet Successfully pulled image "eu.gcr.io/py..my/py...emy_appserver@sha256:479bf3e12ee2b4..8b99a178ee05e8579b940adc8845be74956f5" in 11.742649177s
Normal Created 15m (x5 over 17m) kubelet Created container p..my-appserver-sha256-1
Normal Started 15m (x5 over 17m) kubelet Started container p..emy-appserver-sha256-1
Normal Pulled 15m (x4 over 17m) kubelet Container image "eu.gcr.io/py...my/pya...my_appserver@sha256:479bf3e12ee2b41..e05e8579b940adc8845be74956f5" already present on machine
Warning BackOff 2m42s (x64 over 17m) kubelet Back-off restarting failed container
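No answer was recorded for this question, but since it asks whether a manifest file is needed: yes, a Deployment (plus a Service) is the usual way. A minimal sketch, with placeholder project, image name, and port (a Spring backend commonly listens on 8080); the Angular frontend would get an analogous second Deployment/Service pair rather than being packed into the same pod:
# deployment.yaml - minimal sketch; names, image, and ports are placeholders
apiVersion: apps/v1
kind: Deployment
metadata:
  name: appserver
spec:
  replicas: 1
  selector:
    matchLabels:
      app: appserver
  template:
    metadata:
      labels:
        app: appserver
    spec:
      containers:
        - name: appserver
          image: eu.gcr.io/your-project/appserver:latest  # placeholder image
          ports:
            - containerPort: 8080  # common Spring default; adjust to your app
---
apiVersion: v1
kind: Service
metadata:
  name: appserver
spec:
  selector:
    app: appserver
  ports:
    - port: 80
      targetPort: 8080
Apply it with kubectl apply -f deployment.yaml, then watch kubectl get pods. A CrashLoopBackOff with 0/2 ready usually means the containers themselves exit, so kubectl logs on each container is the next step.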

Back-off pulling image metrics in Kubernetes

I installed the k8s dashboard as follows, from the GitHub dashboard address:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.2.0/aio/deploy/recommended.yaml
and the output of
kubectl get pods --all-namespaces
kubernetes-dashboard dashboard-metrics-scraper-79c5968bdc-64wl9 0/1 ImagePullBackOff 0 48m
kubernetes-dashboard kubernetes-dashboard-9f9799597-w9cp9 1/1 Running 0 48m
and the output of kubectl describe pod is
Normal Scheduled 41m default-scheduler Successfully assigned kubernetes-dashboard/dashboard-metrics-scraper-79c5968bdc-64wl9 to lab-vm
Normal Pulling 30m (x4 over 41m) kubelet Pulling image "kubernetesui/metrics-scraper:v1.0.6"
Warning Failed 27m (x7 over 37m) kubelet Error: ImagePullBackOff
Warning Failed 24m (x5 over 37m) kubelet Failed to pull image "kubernetesui/metrics-scraper:v1.0.6": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/kubernetesui/metrics-scraper:v1.0.6": failed to resolve reference "docker.io/kubernetesui/metrics-scraper:v1.0.6": unexpected status code [manifests v1.0.6]: 408 Request Time-out
Warning Failed 10m (x7 over 37m) kubelet Error: ErrImagePull
Normal BackOff 81s (x73 over 37m) kubelet Back-off pulling image "kubernetesui/metrics-scraper:v1.0.6"
Please help me fix this issue.
Thanks a lot.
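No answer is included here, but the "408 Request Time-out" from docker.io points at registry connectivity from the node rather than a bad image reference. A few checks, assuming you have shell access to the node (lab-vm); whether it runs containerd or Docker is an assumption to verify:
# Try the pull manually on the node to see the raw error (containerd):
sudo crictl pull docker.io/kubernetesui/metrics-scraper:v1.0.6
# or, if the node runs Docker:
sudo docker pull kubernetesui/metrics-scraper:v1.0.6
# Basic connectivity check to Docker Hub (an HTTP 401 here is normal and
# just means the registry is reachable):
curl -I https://registry-1.docker.io/v2/
# Once connectivity is fixed, delete the pod so its controller recreates it
# and the pull is retried immediately:
kubectl -n kubernetes-dashboard delete pod dashboard-metrics-scraper-79c5968bdc-64wl9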

Configuring the Crunchy Data PostgreSQL operator in a Digital Ocean managed kubernetes cluster

I am having trouble configuring the Crunchy Data PostgreSQL operator in my Digital Ocean managed kubernetes cluster. Per their official installation/troubleshooting guide, I changed the default storage classes in the provided manifest file to do-block-storage and I've tried toggling the disable_fsgroup value, all to no avail. I'm getting the following output from running kubectl describe... on the operator pod:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> Successfully assigned pgo/postgres-operator-697fd6dbb6-n764r to test-dev-pool-35jcv
Normal Started 69s kubelet, test-dev-pool-35jcv Started container event
Normal Created 69s kubelet, test-dev-pool-35jcv Created container event
Normal Pulled 69s kubelet, test-dev-pool-35jcv Container image "registry.developers.crunchydata.com/crunchydata/pgo-event:centos7-4.5.0" already present on machine
Normal Started 68s (x2 over 69s) kubelet, test-dev-pool-35jcv Started container scheduler
Normal Created 68s (x2 over 69s) kubelet, test-dev-pool-35jcv Created container scheduler
Normal Pulled 68s (x2 over 69s) kubelet, test-dev-pool-35jcv Container image "registry.developers.crunchydata.com/crunchydata/pgo-scheduler:centos7-4.5.0" already present on machine
Normal Started 64s (x2 over 69s) kubelet, test-dev-pool-35jcv Started container operator
Normal Created 64s (x2 over 70s) kubelet, test-dev-pool-35jcv Created container operator
Normal Pulled 64s (x2 over 70s) kubelet, test-dev-pool-35jcv Container image "registry.developers.crunchydata.com/crunchydata/postgres-operator:centos7-4.5.0" already present on machine
Normal Started 64s (x2 over 70s) kubelet, test-dev-pool-35jcv Started container apiserver
Normal Created 64s (x2 over 70s) kubelet, test-dev-pool-35jcv Created container apiserver
Normal Pulled 64s (x2 over 70s) kubelet, test-dev-pool-35jcv Container image "registry.developers.crunchydata.com/crunchydata/pgo-apiserver:centos7-4.5.0" already present on machine
Warning BackOff 63s (x4 over 67s) kubelet, test-dev-pool-35jcv Back-off restarting failed container
Any ideas?
Edit: Solved! I was specifying the default storage incorrectly. The proper edits are
- name: BACKREST_STORAGE
  value: "digitalocean"
- name: BACKUP_STORAGE
  value: "digitalocean"
- name: PRIMARY_STORAGE
  value: "digitalocean"
- name: REPLICA_STORAGE
  value: "digitalocean"
- name: STORAGE5_NAME
  value: "digitalocean"
- name: STORAGE5_ACCESS_MODE
  value: "ReadWriteOnce"
- name: STORAGE5_SIZE
  value: "1Gi"
- name: STORAGE5_TYPE
  value: "dynamic"
- name: STORAGE5_CLASS
  value: "do-block-storage"
See this GitHub issue for how to correctly format the file for DO.
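A quick sanity check worth doing before editing the manifest (my addition): confirm the do-block-storage class actually exists in the cluster, since it is what the storage entries above point at:
kubectl get storageclass
# Expect an entry along these lines:
# NAME                         PROVISIONER                 ...
# do-block-storage (default)   dobs.csi.digitalocean.com   ...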

How to fix ImagePullBackOff with a Kubernetes pod?

I created a pod 5 hours ago. Now I have the error ImagePullBackOff.
These are the events from describe pod:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 4h51m default-scheduler Successfully assigned default/nodehelloworld.example.com to minikube
Normal Pulling 4h49m (x4 over 4h51m) kubelet, minikube pulling image "milenkom/docker-demo"
Warning Failed 4h49m (x4 over 4h51m) kubelet, minikube Failed to pull image "milenkom/docker-demo": rpc error: code = Unknown desc = Error response from daemon: manifest for milenkom/docker-demo:latest not found
Warning Failed 4h49m (x4 over 4h51m) kubelet, minikube Error: ErrImagePull
Normal BackOff 4h49m (x6 over 4h51m) kubelet, minikube Back-off pulling image "milenkom/docker-demo"
Warning Failed 4h21m (x132 over 4h51m) kubelet, minikube Error: ImagePullBackOff
Warning FailedMount 5m13s kubelet, minikube MountVolume.SetUp failed for volume "default-token-zpl2j" : couldn't propagate object cache: timed out waiting for the condition
Normal Pulling 3m34s (x4 over 5m9s) kubelet, minikube pulling image "milenkom/docker-demo"
Warning Failed 3m32s (x4 over 5m2s) kubelet, minikube Failed to pull image "milenkom/docker-demo": rpc error: code = Unknown desc = Error response from daemon: manifest for milenkom/docker-demo:latest not found
Warning Failed 3m32s (x4 over 5m2s) kubelet, minikube Error: ErrImagePull
Normal BackOff 3m5s (x6 over 5m1s) kubelet, minikube Back-off pulling image "milenkom/docker-demo"
Warning Failed 3m5s (x6 over 5m1s) kubelet, minikube Error: ImagePullBackOff
Images on my desktop
docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
milenkom/docker-demo tagname 08d27ff00255 6 hours ago 659MB
Following advice from Max and Shanica, I made a mess when tagging:
docker tag 08d27ff00255 docker-demo:latest
It works OK, but when I try
docker push docker-demo:latest
The push refers to repository [docker.io/library/docker-demo]
e892b52719ff: Preparing
915b38bfb374: Preparing
3f1416a1e6b9: Preparing
e1da644611ce: Preparing
d79093d63949: Preparing
87cbe568afdd: Waiting
787c930753b4: Waiting
9f17712cba0b: Waiting
223c0d04a137: Waiting
fe4c16cbf7a4: Waiting
denied: requested access to the resource is denied
although I am logged in.
Output of docker image inspect 08d27ff00255:
[
    {
        "Id": "sha256:08d27ff0025581727ef548437fce875d670f9e31b373f00c2a2477f8effb9816",
        "RepoTags": [
            "docker-demo:latest",
            "milenkom/docker-demo:tagname"
        ],
Why does assigning the pod still fail?
manifest for milenkom/docker-demo:latest not found
Looks like there's no latest tag in the image you want to pull: https://hub.docker.com/r/milenkom/docker-demo/tags.
Try an existing tag instead.
UPD (based on question update):
docker push milenkom/docker-demo:tagname
update k8s pod to point to milenkom/docker-demo:tagname
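One step worth spelling out (my addition, not part of the original answer): the push was denied because the image was re-tagged as plain docker-demo:latest, with no Docker Hub username, so docker push targeted docker.io/library/docker-demo, a namespace you cannot write to. Tagging with the username fixes the push, and the pod can then pull a tag that actually exists:
# Tag with your Docker Hub username so the push goes to your own repository:
docker tag 08d27ff00255 milenkom/docker-demo:tagname
docker push milenkom/docker-demo:tagname
# Then point the pod spec at the pushed tag:
#   image: milenkom/docker-demo:tagname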

How to diagnose an "Error syncing pod" error on Kubernetes?

I've got a deployment with a pod that is stuck at:
The describe output has some sensitive details in it, but the events has this at the end:
...
Normal Pulled 18m (x3 over 21m) kubelet, ip-10-151-21-127.ec2.internal Successfully pulled image "example/asdf"
Warning FailedSync 7m (x53 over 19m) kubelet, ip-10-151-21-127.ec2.internal Error syncing pod
What is the cause of this error? How can I diagnose this further?
It seems to be re-pulling the image, but it's odd that it's x10 over 27m; I wonder if it's maybe reaching a timeout?
Warning FailedSync 12m (x53 over 23m) kubelet, ip-10-151-21-127.ec2.internal Error syncing pod
Normal Pulling 2m (x10 over 27m) kubelet, ip-10-151-21-127.ec2.internal pulling image "aoeuoeauhtona.epgso"
The kubelet process is responsible for pulling images from a registry.
This is how you can check the kubelet logs:
$ journalctl -u kubelet
More information about images can be found in the documentation.
You can check the logs of your pod:
kubectl logs pod-id
More information here: https://kubernetes.io/docs/tasks/debug-application-cluster/debug-pod-replication-controller/
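Building on the answer above, a sketch of how to narrow the error down further; the pod name is a placeholder, and a systemd-managed kubelet is an assumption:
# Filter the kubelet logs for the failing pod:
journalctl -u kubelet --since "1 hour ago" | grep <pod-name>
# The tail of the events usually names the underlying failure
# (image pull, volume mount, sandbox creation, ...):
kubectl describe pod <pod-name> | grep -A 20 'Events:'
# If the pull itself is the suspect, verify that the image reference resolves:
docker pull <image-from-the-events>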