"Error from server (NotFound): deployments.apps "wordpress" not found" I am getting this error although I've deployed it? - kubernetes

I'm trying to expose the pod that I've already created as a service, but I keep getting the aforementioned error.
The first error is because I had already deployed the pods the other day. But the second error is the main problem.
It would be great if anyone could help me out.

kubectl run ...
is used to create and run a particular image in a pod. [reference]
kubectl expose ...
is used to expose a resource (pod, service, replicationcontroller, deployment, replicaset) as a new k8s service. [reference]
What you are doing is creating a pod with kubectl run and exposing a deployment with kubectl expose deployment. Those are two different resources. That's why you are getting the NotFound error: the specified deployment does not exist.
What you can do is either
kubectl expose pod ...
or create a deployment.
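For example, as a rough sketch only (the pod name wordpress is taken from the error message, while the ports and image below are assumptions you would need to adjust):
kubectl expose pod wordpress --port=80 --target-port=80 --name=wordpress
or, going the deployment route:
kubectl create deployment wordpress --image=wordpress
kubectl expose deployment wordpress --port=80 --target-port=80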

kubernetes logs for service or deployment

I am having real trouble understanding how I am supposed to debug my current situation. I have followed the setup instructions from https://docs.substra.org/en/stable/contributing/getting-started.html#
There is a backend service which was created as a ClusterIP, and therefore cannot be accessed from the host.
I created a load balancer for this purpose, using the command:
kubectl expose deployment deployment_name --port=8000 --target-port=8000 \
--name=lb_service --type=LoadBalancer
However, the attempt to access the backend service fails with a connection timeout when I use the LoadBalancer Ingress IP and NodePort port. I'd like to see the relevant logs to check where the problem occurred. However, apparently kubectl logs on a service only shows logs for pods, whereas the load balancer, according to the kubectl expose command, is attached to the deployment. Therefore, I am not able to see any logs related either to the load balancer service or to the deployment component.
When I looked at the pod which is supposed to be hosting the deployment, the log showed no error.
Can someone point out where to look for logs that can help debug this failed connectivity?
You probably need to look at the ingress logs, see this page from the documentation: https://kubernetes.github.io/ingress-nginx/troubleshooting/.
It is true that you can only get logs from pods. However, that is sufficient to see the relevant error messages.
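A rough sketch of how to find the pods behind the Service and follow their logs (the resource names are placeholders to replace with your own):
$ kubectl get endpoints <service-name>          # lists the pod IPs the Service actually routes to
$ kubectl get pods -o wide                      # match those IPs back to pod names
$ kubectl logs -f deployment/<deployment-name>  # follows the logs of a pod belonging to the deployment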

How to solve "Ingress Error: Some backend services are in UNHEALTHY state"?

I am working on deploying a certain pod to GKE but I am having an unhealthy state for my backend services.
The deployment went through via helm install process but the ingress reports a certain warning error that says Some backend services are in UNHEALTHY state. I have tried to access the logs but do not know exactly what to look out for. Also, I already have liveness and readiness probes running.
What could I do to make the ingress come back to a healthy state? Thanks
Picture of warning error on GKE UI
Without more details it is hard to determine the exact cause.
As first point I want to mention, that your error message is Some backend services are in UNHEALTHY state, not All backend services are in UNHEALTHY state. It indicates that only a few of your backends are affected.
There might be many reasons: whether you are using GCP Ingress or Nginx Ingress, your externalTrafficPolicy configuration, whether you are using preemptible nodes, your livenessProbe and readinessProbe settings, health checks, etc.
Since in your scenario only a few backends are affected, the only thing I can suggest with the current information is some debug options.
Using $ kubectl get po -n <namespace>, check if all your pods are working correctly: all containers within the pods should be Ready and the pod status should be Running. Then check the logs of any suspicious pod with $ kubectl logs <podname> -c <containerName>. In general you should check all the pods the load balancer is pointing to,
Confirm if livenessProbe and readinessProbe are configured properly and response is 200,
Describe your ingress $ kubectl describe ingress <yourIngressName> and check the backends (see the example below),
Check if you've configured your health checks properly according to GKE Ingress for HTTP(S) Load Balancing - Health Checks guide.
If you still won't be able to solve this issue with the above debug options, please provide more details about your environment, with logs (without private information).
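As an illustration of the "check the backends" step, on GKE the ingress controller records per-backend health in an annotation on the Ingress object. The backend names below are made up purely for illustration, but the output looks roughly like this:
$ kubectl describe ingress <yourIngressName> -n <namespace>
...
Annotations:  ingress.kubernetes.io/backends: {"k8s-be-30093--abc123":"HEALTHY","k8s-be-31175--abc123":"UNHEALTHY"}
...
Any backend stuck in UNHEALTHY here is the one whose Service, pods and health check settings deserve a closer look.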
Useful links:
kubernetes unhealthy ingress backend
GKE Ingress shows unhealthy backend services
In GKE you can define a BackendConfig to configure custom health checks. You can use the link below to configure this so that the ingress backends end up in a HEALTHY state.
https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#direct_health
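A minimal sketch of what that can look like, assuming a reasonably recent GKE version (the resource name, path and port below are placeholders to adjust): first create the BackendConfig,
cat <<EOF | kubectl apply -f -
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: my-backendconfig
spec:
  healthCheck:
    type: HTTP
    requestPath: /healthz   # a path your app answers with HTTP 200
    port: 8080              # the container port serving that path
EOF
then point the Service behind the Ingress at it:
kubectl annotate service <your-service> cloud.google.com/backend-config='{"default": "my-backendconfig"}'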
If you have kubectl access to your pods, you can run kubectl get pod, and then kubectl logs -f <pod-name>.
Review the logs and find the error(s).

Airflow is receiving incorrect POD status from Kubernetes

We are using Airflow to schedule Spark job on Kubernetes. Recently, I have encountered a scenario where:
airflow received error 404 with the message "pods pod-name not found"
I manually checked that the POD was actually working fine at that time. In fact, I was able to collect logs using kubectl logs -f -n namespace podname
What happened due to this is that airflow created another POD for running the same job, which resulted in a race condition.
Airflow is using the Kubernetes Python client's read_namespaced_pod() API:
def read_pod(self, pod):
    """Read POD information"""
    try:
        return self._client.read_namespaced_pod(pod.metadata.name, pod.metadata.namespace)
    except BaseHTTPError as e:
        raise AirflowException(
            'There was an error reading the kubernetes API: {}'.format(e)
        )
I believe read_namespaced_pod() calls the Kubernetes API. In order to investigate this further, I would like to check the logs of the Kubernetes API server.
Can you please share the steps to check what is happening on the Kubernetes side?
Note: Kubernetes version is 1.18 and Airflow version is 1.10.10.
Answering the question from the perspective of logs/troubleshooting:
I believe read_namespaced_pod() calls the Kubernetes API. In order to investigate this further, I would like to check the logs of the Kubernetes API server.
Yes, you are correct, this function calls the Kubernetes API. You can check the logs of the Kubernetes API server by running:
$ kubectl logs -n kube-system KUBERNETES_API_SERVER_POD_NAME
I would also consider checking the kube-controller-manager:
$ kubectl logs -n kube-system KUBERNETES_CONTROLLER_MANAGER_POD_NAME
The example output of it:
I0413 12:33:12.840270 1 event.go:291] "Event occurred" object="default/nginx-6799fc88d8" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-6799fc88d8-kchp7"
A side note!
The above commands will work assuming that your kubernetes-apiserver and kubernetes-controller-manager Pods are visible to you.
Can you please share the steps to check what is happening on the Kubernetes side?
This question targets the basics of troubleshooting/logs checking.
For that you can use following commands (and the ones mentioned earlier):
$ kubectl get RESOURCE RESOURCE_NAME:
example: $ kubectl get pod airflow-pod-name
also you can add -o yaml for more information
$ kubectl describe RESOURCE RESOURCE_NAME:
example: $ kubectl describe pod airflow-pod-name
$ kubectl logs POD_NAME:
example: $ kubectl logs airflow-pod-name
Additional resources:
Kubernetes.io: Docs: Concepts: Cluster administration: Logging Architecture
Kubernetes.io: Docs: Tasks: Debug application cluster: Debug cluster

Update deployment fails when same name exists in separate namespaces

I've used the following command to update the image run in a deployment:
kubectl --cluster websites --namespace production set image \
deployment/mobile-web mobile-web=eu.gcr.io/websites/mobile-web:0.23
This worked well until I created a staging namespace mirroring the production environment. In other words the deployment mobile-web exists both in the production and staging namespace. Now I get the error:
Error from server: the server could not find the requested resource
(get deployments.extensions mobile-web)
What am I missing here? Or is the only way to update to use a YAML or JSON file, which means a bit more work in the CI/CD pipeline? I've tried setting the namespace with:
kubectl config set-context production --namespace=production --cluster=websites
but to no avail.
The solution in my case was to kill the current proxy, get new credentials, and start the proxy again:
gcloud container clusters get-credentials websites
kubectl proxy --port=8080
Now both commands work as expected:
kubectl get deployment mobile-web --namespace=production
kubectl get deployment mobile-web --namespace=staging
However it doesn't explain why it stopped working in the first place.
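One thing that can help when this happens is to confirm which context and namespace kubectl is actually using before touching the deployment itself, for example:
kubectl config current-context
kubectl config get-contexts
kubectl config view --minify | grep namespace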

Error while creating pods in Kubernetes

I have installed Kubernetes on an Ubuntu server using the instructions here. I am trying to create pods using kubectl run hello-minikube --image=gcr.io/google_containers/echoserver:1.4 --hostport=8000 --port=8080 as listed in the example. However, when I do kubectl get pod, the status of the pod is Pending. I further did kubectl describe pod for debugging and I see the message:
FailedScheduling pod (hello-minikube-3383150820-1r4f7) failed to fit in any node fit failure on node (minikubevm): PodFitsHostPorts.
I further tried to delete this pod with kubectl delete pod hello-minikube-3383150820-1r4f7, but when I run kubectl get pod again I see another pod with the prefix "hello-minikube-3383150820-" that I haven't created. Does anyone know how to fix this problem? Thank you in advance.
The PodFitsHostPorts predicate is failing because you have something else on your nodes using port 8000. You might be able to find what it is by running kubectl describe svc.
kubectl run creates a deployment object (you can see it with kubectl describe deployments) which makes sure that you always keep the intended number of replicas of the pod running (in this case 1). When you delete the pod, the deployment controller automatically creates another for you. If you want to delete the deployment and the pods it keeps creating, you can run kubectl delete deployments hello-minikube.
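A rough sketch of how that could look here (8001 is just an assumed free host port; you could also drop --hostport entirely if you don't need it):
kubectl delete deployment hello-minikube
kubectl run hello-minikube --image=gcr.io/google_containers/echoserver:1.4 --hostport=8001 --port=8080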