kubectl does not output the logs - kubernetes

I print all of my Pods with:
$ kubectl get pods --all-namespaces
and the output is:
NAMESPACE             NAME                                                  READY   STATUS    RESTARTS   AGE
calico-system         calico-kube-controllers-7487d7f956-hx4fp              1/1     Running   0          88m
calico-system         calico-node-vn52p                                     1/1     Running   0          88m
calico-system         calico-typha-7588984c44-m6tsz                         1/1     Running   0          88m
gitlab-managed-apps   install-ingress                                       0/1     Error     0          14m
gitlab-managed-apps   install-prometheus                                    0/1     Error     0          12m
kube-system           coredns-f9fd979d6-2n2pg                               1/1     Running   0          91m
kube-system           coredns-f9fd979d6-sq9bl                               1/1     Running   0          91m
kube-system           etcd-tuoputuo-iamnotstone-server                      1/1     Running   0          91m
kube-system           kube-apiserver-tuoputuo-iamnotstone-server            1/1     Running   0          91m
kube-system           kube-controller-manager-tuoputuo-iamnotstone-server   1/1     Running   0          91m
kube-system           kube-proxy-87jkr                                      1/1     Running   0          91m
kube-system           kube-scheduler-tuoputuo-iamnotstone-server            1/1     Running   0          91m
tigera-operator       tigera-operator-58f56c4958-4x9tp                      1/1     Running   0          89m
But when I execute the logs command:
$ kubectl logs -f install-ingress
I see this error
Error from server (NotFound): pods "install-ingress" not found

The install-ingress pod is in the gitlab-managed-apps namespace. If you do not specify a namespace in the kubectl command, it searches for the pod in the default namespace, where install-ingress is not present.
Try the command below, specifying the pod's namespace:
kubectl logs -f install-ingress -n gitlab-managed-apps
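More generally, when `kubectl logs` reports NotFound, you can look up which namespace the pod lives in and pass it with `-n`. A minimal sketch — the sample listing is inlined here for illustration; in practice you would pipe `kubectl get pods --all-namespaces` directly:

```shell
# A few lines of the `kubectl get pods --all-namespaces` output from the question
output='NAMESPACE NAME READY STATUS RESTARTS AGE
gitlab-managed-apps install-ingress 0/1 Error 0 14m
kube-system coredns-f9fd979d6-2n2pg 1/1 Running 0 91m'

pod=install-ingress
# Column 1 is the namespace, column 2 the pod name
ns=$(printf '%s\n' "$output" | awk -v p="$pod" '$2 == p { print $1 }')
echo "$ns"                          # gitlab-managed-apps

# With the namespace known, the logs command becomes:
echo "kubectl logs -f $pod -n $ns"
```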

Related

Debezium with Kafka connector for the MSSQL connector class in Kubernetes

I am facing this error:
status:
  conditions:
  - lastTransitionTime: "2022-07-04T14:09:43.687887Z"
    message: 'GET /connectors/debezium-connector-mssql/topics returned 404 (Not Found):
      Unexpected status code'
    reason: ConnectRestException
    status: "True"
    type: NotReady
  observedGeneration: 3
  tasksMax: 1
  topics: []
This is my pod status:
$ kubectl get pods -A
NAMESPACE     NAME                                          READY   STATUS    RESTARTS        AGE
kafka         my-cluster-entity-operator-77bcc4b67f-8qwbp   3/3     Running   0               3d19h
kafka         my-cluster-kafka-0                            1/1     Running   0               3d19h
kafka         my-cluster-zookeeper-0                        1/1     Running   0               3d19h
kafka         my-connect-cluster-connect-f4b6ccc55-s5cxz    1/1     Running   0               3d19h
kafka         strimzi-cluster-operator-86864b86d5-rq9pp     1/1     Running   0               3d19h
kube-system   coredns-6d4b75cb6d-9gjxx                      1/1     Running   0               3d19h
kube-system   etcd-minikube                                 1/1     Running   0               3d19h
kube-system   kube-apiserver-minikube                       1/1     Running   0               3d19h
kube-system   kube-controller-manager-minikube              1/1     Running   0               3d19h
kube-system   kube-proxy-d6gl6                              1/1     Running   0               3d19h
kube-system   kube-scheduler-minikube                       1/1     Running   0               3d19h
kube-system   storage-provisioner                           1/1     Running   1 (3d19h ago)   3d19h
Then I opened an interactive terminal in the Kafka pod:
$ kubectl exec -it my-cluster-kafka-0 bash -n kafka
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
[kafka#my-cluster-kafka-0 kafka]$ curl my-connect-cluster-connect-api:8083/connectors/debezium-connector-mssql/status
{"name":"debezium-connector-mssql","connector":{"state":"RUNNING","worker_id":"172.17.0.7:8083"},
"tasks":[{"id":0,"state":"FAILED","worker_id":"172.17.0.7:8083","trace":"org.apache.kafka.connect.errors.ConnectException: Error configuring an instance of SqlServerConnectorTask; check the logs for details\n\tat io.debezium.connector.common.BaseSourceTask.start(BaseSourceTask.java:91)\n\tat org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:208)\n\tat org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:177)\n\tat org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:227)\n\tat java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)\n\tat java.util.concurrent.FutureTask.run(FutureTask.java:266)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)\n\tat java.lang.Thread.run(Thread.java:748)\n"}],"type":"source"}
So the failure is in configuring an instance of SqlServerConnectorTask. How can I debug this?
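The `/status` response above already narrows it down: the connector is RUNNING but task 0 is FAILED. A sketch of pulling the per-task states out of that response without jq — the JSON here is an abridged copy of the response from the question; in practice you would pipe the `curl` call shown earlier:

```shell
# Abridged Kafka Connect /status response from the question
status='{"name":"debezium-connector-mssql","connector":{"state":"RUNNING"},"tasks":[{"id":0,"state":"FAILED","worker_id":"172.17.0.7:8083"}],"type":"source"}'

# grep -o prints every "state":"..." pair: the connector state first, then each task
printf '%s' "$status" | grep -o '"state":"[A-Z]*"'
```

Since the trace says "check the logs for details", the next step would be `kubectl logs` on the Connect worker pod (`my-connect-cluster-connect-f4b6ccc55-s5cxz -n kafka` in this listing) to find the underlying configuration error; after fixing it, the failed task can be restarted via the Connect REST API (`POST /connectors/<name>/tasks/0/restart`).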

Calico etcd has no key named calico

I have a 2 node kubernetes cluster with calico networking. All the pods are up and running.
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-etcd-94466                          1/1     Running   0          21h
kube-system   calico-kube-controllers-5fdcfdbdf7-xsjxb   1/1     Running   0          14d
kube-system   calico-node-hmnf5                          2/2     Running   0          14d
kube-system   calico-node-vmmmk                          2/2     Running   0          14d
kube-system   coredns-78fcdf6894-dlqg6                   1/1     Running   0          14d
kube-system   coredns-78fcdf6894-zwrd6                   1/1     Running   0          14d
kube-system   etcd-kube-master-01                        1/1     Running   0          14d
kube-system   kube-apiserver-kube-master-01              1/1     Running   0          14d
kube-system   kube-controller-manager-kube-master-01     1/1     Running   0          14d
kube-system   kube-proxy-nxfht                           1/1     Running   0          14d
kube-system   kube-proxy-qnn45                           1/1     Running   0          14d
kube-system   kube-scheduler-kube-master-01              1/1     Running   0          14d
I wanted to query calico-etcd using etcdctl, but I get the following error.
# etcdctl --debug --endpoints "http://10.142.137.11:6666" get calico
start to sync cluster using endpoints(http://10.142.137.11:6666)
cURL Command: curl -X GET http://10.142.137.11:6666/v2/members
got endpoints(http://10.142.137.11:6666) after sync
Cluster-Endpoints: http://10.142.137.11:6666
cURL Command: curl -X GET http://10.142.137.11:6666/v2/keys/calico?quorum=false&recursive=false&sorted=false
Error: 100: Key not found (/calico) [4]
Any pointers on why I get this error?
As @JakubBujny mentioned, ETCDCTL_API=3 should be set to get the appropriate result.
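For reference, a sketch of the v3 invocation (endpoint taken from the question; the exact key names under /calico depend on your Calico datastore layout):

```shell
# etcd v2 and v3 keep separate keyspaces, so the v2 `get calico` above cannot
# see data written through the v3 API. Select the v3 API explicitly:
ETCDCTL_API=3 etcdctl --endpoints "http://10.142.137.11:6666" \
  get /calico --prefix --keys-only
```

`--prefix` makes the v3 `get` behave like the recursive listing the v2 command was attempting, and `--keys-only` suppresses the values so you can first confirm the key tree exists.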

Kubernetes dashboard (web UI) not working

I have just started a new Kubernetes 1.8.0 environment using minikube (0.27) on Windows 10.
I followed these steps, but it didn't work:
https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/
When I list pods this is the result:
C:\WINDOWS\system32>kubectl get pods --all-namespaces
NAMESPACE     NAME                                    READY   STATUS             RESTARTS   AGE
kube-system   etcd-minikube                           1/1     Running            0          23m
kube-system   heapster-69b5d4974d-s9vrf               1/1     Running            0          5m
kube-system   kube-addon-manager-minikube             1/1     Running            0          23m
kube-system   kube-apiserver-minikube                 1/1     Running            0          23m
kube-system   kube-controller-manager-minikube        1/1     Running            0          23m
kube-system   kube-dns-545bc4bfd4-xkt7l               3/3     Running            3          1h
kube-system   kube-proxy-7jnk6                        1/1     Running            0          23m
kube-system   kube-scheduler-minikube                 1/1     Running            0          23m
kube-system   kubernetes-dashboard-5569448c6d-8zqnc   1/1     Running            2          52m
kube-system   kubernetes-dashboard-869db7f6b4-ddlmq   0/1     CrashLoopBackOff   19         51m
kube-system   monitoring-influxdb-78d4c6f5b6-b66m9    1/1     Running            0          4m
kube-system   storage-provisioner                     1/1     Running            2          1h
As you can see, I have two kubernetes-dashboard pods now; one of them is Running and the other one is in CrashLoopBackOff.
When I try to run minikube dashboard this is the result:
"Waiting, endpoint for service is not ready yet..."
I have tried to remove kubernetes-dashboard-869db7f6b4-ddlmq pod:
kubectl delete pod kubernetes-dashboard-869db7f6b4-ddlmq
This is the result:
"Error from server (NotFound): pods "kubernetes-dashboard-869db7f6b4-ddlmq" not found"
The delete failed because you did not specify the namespace (add -n kube-system). And there should be only one dashboard pod if no modifications were applied. If minikube dashboard still fails after you delete the abnormal pod, please provide more logs.
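A sketch of the namespaced delete (pod name taken from the question; the dashboard's controller will recreate the pod after deletion, which is the point — you get a fresh replacement instead of the crash-looping one):

```shell
# Pods in kube-system must be addressed with -n kube-system; without it,
# kubectl looks in the `default` namespace and reports NotFound.
kubectl delete pod kubernetes-dashboard-869db7f6b4-ddlmq -n kube-system

# Watch the replacement pod come up, then retry the dashboard:
kubectl get pods -n kube-system -w
minikube dashboard
```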

kubectl cannot exec or logs pod on other node

v1.8.2, installed by kubeadm.
2 nodes:
NAME                    STATUS   ROLES    AGE   VERSION
192-168-99-102.node     Ready    <none>   8h    v1.8.2
192-168-99-108.master   Ready    master   8h    v1.8.2
I ran nginx to test:
NAME                    READY   STATUS    RESTARTS   AGE   IP            NODE
curl-6896d87888-smvjm   1/1     Running   0          7h    10.244.1.99   192-168-99-102.node
nginx-fbb985966-5jbxd   1/1     Running   0          7h    10.244.1.94   192-168-99-102.node
nginx-fbb985966-8vp9g   1/1     Running   0          8h    10.244.1.93   192-168-99-102.node
nginx-fbb985966-9bqzh   1/1     Running   1          7h    10.244.0.85   192-168-99-108.master
nginx-fbb985966-fd22h   1/1     Running   1          7h    10.244.0.83   192-168-99-108.master
nginx-fbb985966-lmgmf   1/1     Running   0          7h    10.244.1.98   192-168-99-102.node
nginx-fbb985966-lr2rh   1/1     Running   0          7h    10.244.1.96   192-168-99-102.node
nginx-fbb985966-pm2p7   1/1     Running   0          7h    10.244.1.97   192-168-99-102.node
nginx-fbb985966-t6d8b   1/1     Running   0          7h    10.244.1.95   192-168-99-102.node
kubectl exec on a pod running on the master works fine,
but when I exec into a pod on the other node, it returns an error:
kubectl exec -it nginx-fbb985966-8vp9g bash
error: unable to upgrade connection: pod does not exist
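No fix is given in the thread, but a sketch of two things worth checking (the suggested cause is an assumption, not stated in the question):

```shell
# Current exec syntax separates the command with `--` (the bare form is deprecated):
kubectl exec -it nginx-fbb985966-8vp9g -- bash

# "unable to upgrade connection" often indicates that the API server cannot
# reach the kubelet on the worker node under the name/IP it registered with.
# Check the addresses the cluster has recorded for each node:
kubectl get nodes -o wide
```

If the worker's recorded address is wrong or unresolvable from the master, exec and logs will fail for pods on that node even though `get pods` (which only talks to the API server) works.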

kubectl logs not working after creating cluster with kubeadm

I followed the guide on "Using kubeadm to Create a Cluster" but I am not able to view logs using kubectl:
root@o1:~# kubectl logs -n kube-system etcd-o1
Error from server: Get https://149.156.11.4:10250/containerLogs/kube-system/etcd-o1/etcd: tls: first record does not look like a TLS handshake
The above IP address is the cloud frontend address, not the address of the VM, which probably causes the problem. Some other kubectl commands do work:
root@o1:~# kubectl cluster-info
Kubernetes master is running at https://10.6.16.88:6443
KubeDNS is running at https://10.6.16.88:6443/api/v1/namespaces/kube-system/services/kube-dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
root@o1:~# kubectl get pods --all-namespaces
NAMESPACE     NAME                         READY   STATUS    RESTARTS   AGE
kube-system   etcd-o1                      1/1     Running   0          3h
kube-system   kube-apiserver-o1            1/1     Running   0          3h
kube-system   kube-controller-manager-o1   1/1     Running   0          3h
kube-system   kube-dns-545bc4bfd4-mhbfb    3/3     Running   0          3h
kube-system   kube-flannel-ds-lw87h        2/2     Running   0          1h
kube-system   kube-flannel-ds-rkqxg        2/2     Running   2          1h
kube-system   kube-proxy-hnhfs             1/1     Running   0          3h
kube-system   kube-proxy-qql4r             1/1     Running   0          1h
kube-system   kube-scheduler-o1            1/1     Running   0          3h
Please help.
Maybe change the address in $HOME/admin.conf.
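A sketch of what to check, following that suggestion (the file path is from the answer; the addresses are the ones shown in the question):

```shell
# kubectl itself talks to the `server:` address in the kubeconfig; verify
# which one it is currently using:
kubectl config view -o jsonpath='{.clusters[0].cluster.server}'

# If it points at the cloud frontend rather than the VM, edit the server
# line in $HOME/admin.conf (or ~/.kube/config) to the VM address, e.g.:
#   server: https://10.6.16.88:6443
```

Note that `kubectl logs` is proxied by the API server to the kubelet on port 10250, so if the 149.156.11.4 address in the error is what the *node* registered with, the node's advertised address may also need correcting, not just the kubeconfig.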