How to see the application logs in a Pod - Kubernetes

We are moving towards microservices and using Kubernetes for cluster orchestration. We are building our infrastructure with Dynatrace and a Prometheus server for metrics collection, but they are not yet in good shape.
Our Java application on one of the pods is not working. I want to see the application logs.
How do I access these logs?

Assuming the application logs to stdout/stderr: kubectl logs -n namespacename podname.
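A minimal sketch of a few commonly useful variations, assuming the pod is named podname in namespace namespacename and the container is named app (all placeholders):
kubectl logs -n namespacename podname -f                    # stream the logs live
kubectl logs -n namespacename podname --previous            # logs from the previous (crashed) container
kubectl logs -n namespacename podname -c app --tail=100     # one specific container, last 100 lines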

Related

kubernetes logs for service or deployment

I am having real trouble understanding how I am supposed to debug my current situation. I have followed the setup instructions from https://docs.substra.org/en/stable/contributing/getting-started.html#
There is a backend service which was created as a ClusterIP, and therefore cannot be accessed from the host.
I created a load balancer for this purpose, using the command
kubectl expose deployment deployment_name --port=8000 --target-port=8000 \
--name=lb_service --type=LoadBalancer
However, the attempt to access the backend service fails with a connection timeout when I use the LoadBalancer Ingress IP and NodePort port. I would like to see the relevant logs to check where the problem occurred. However, kubectl logs apparently only shows logs for pods, whereas the load balancer, according to the kubectl expose command, is attached to the deployment. Therefore, I am not able to see any logs related either to the load balancer service or to the deployment component.
When I looked at the pod which is supposed to be hosting the deployment, the log showed no error.
Can someone point out where I should look for logs that can help debug this failed connectivity?
You probably need to look at the ingress logs; see this page from the documentation: https://kubernetes.github.io/ingress-nginx/troubleshooting/.
It is true that you can only get logs from pods. However, that is sufficient to see the relevant error messages.
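To see the logs from every pod backing the deployment at once, you can use a label selector. This is only a sketch and assumes the pods carry an app label matching the deployment name; check the actual labels first:
kubectl get pods --show-labels                             # confirm which labels the pods carry
kubectl logs -l app=deployment_name --all-containers=true  # logs from all pods matching that label
kubectl describe service lb_service                        # verify the exposed service actually has endpoints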

Rabbitmq Cluster configuration in Google Kubernetes Engine

I have installed krew and installed the rabbitmq plugin using it. Using the kubectl rabbitmq -n create instance --image=custom-image:v1 command, I created a RabbitMQ stateful set in my Google Kubernetes Engine cluster.
The deployment was successful, but now when I try to update the stateful set with the new image custom-image:v2, it is not getting rolled out.
Can someone help me here?
Thanks & Regards,
Robin
Normally, if you check the StatefulSet events you will get a hint about what is going wrong. Usually, if the previous version is still running, the v2 image is not reachable or cannot be deployed.
kubectl describe statefulset <statefulSet-name> -n <namespace>
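A few more commands usually narrow it down quickly; a minimal sketch, assuming the StatefulSet is called rabbitmq in namespace rabbit (both placeholders):
kubectl rollout status statefulset rabbitmq -n rabbit     # is the update progressing or stuck?
kubectl get pods -n rabbit                                # look for ImagePullBackOff or CrashLoopBackOff
kubectl get events -n rabbit --sort-by=.lastTimestamp     # recent events, newest last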

Send Kubernetes pod's logs to Splunk

I am using Amazon EKS and I have a server (consider it as X ) which is connected to the control node using kubectl.
I am able to get the pod logs from the server X by running the following command.
kubectl logs -f podname -n=namespace
Now my goal is to send these pod logs to Splunk, for which I am using splunk-connect-for-kubernetes.
But as per the configuration in the values.yaml file, the Kubernetes logs are forwarded to Splunk instead of the pod logs.
I would specifically like to send the pod logs, i.e. my application logs, to Splunk. Is there any way to achieve this?
One option you have is to use a Fluentd/Fluent Bit combination to read the container logs and send them to Splunk.
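As an illustration, a minimal Fluent Bit configuration that tails the container log files on each node and ships them to a Splunk HEC endpoint could look roughly like this; the host, port, token and paths are placeholders you would replace with your own values:
[INPUT]
    Name          tail
    Path          /var/log/containers/*.log
    Tag           kube.*

[FILTER]
    Name          kubernetes
    Match         kube.*

[OUTPUT]
    Name          splunk
    Match         kube.*
    Host          splunk-hec.example.com
    Port          8088
    Splunk_Token  00000000-0000-0000-0000-000000000000
    TLS           On
Fluent Bit is normally deployed as a DaemonSet so the container logs of every node are picked up.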

Kubernetes cluster instance

I have created a Kubernetes cluster and one of the instances in the cluster is inactive.
I want to review the configuration of the inactive instance in the Kubernetes Engine cluster. Which command should I use to check it?
Should I use "kubectl config get-contexts"?
or
kubectl config use-context and kubectl config view?
I am a beginner to cloud; can anyone explain?
The kubectl config get-contexts command will not help you debug why the instance is failing. Basically, it will just show you the list of contexts. A context is a group of cluster access parameters; each context contains a Kubernetes cluster, a user, and a namespace. The current context is the cluster that is currently the default for kubectl. On the other hand, kubectl config view will just print your kubeconfig settings.
The best way to start is the official Kubernetes documentation. It provides good basic steps for troubleshooting your cluster. Some of the steps can be applied to GKE as well as to kubeadm or Minikube clusters.
If you're using GKE, you can read the node logs from Stackdriver. This document is an excellent start when you want to check the logs directly in the log viewer.
If one of your instances reports NotReady after listing them with kubectl get nodes, I suggest you SSH to that instance and check the Kubernetes components (kubelet and kube-proxy). You can view the GKE nodes from the instances page.
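As a quick sketch, listing the nodes and opening an SSH session to the affected one could look like this on GKE (the node name and zone are placeholders for your own values):
kubectl get nodes
gcloud compute ssh gke-mycluster-default-pool-abc123 --zone us-central1-a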
Kube Proxy logs:
/var/log/kube-proxy.log
If you want to check the kubelet logs, they're a systemd unit on COS (Container-Optimized OS) that can be accessed using journalctl.
Kubelet logs:
sudo journalctl -u kubelet
For further debugging, it is worth mentioning that the GKE master is a node inside a Google-managed project, which is different from your cluster's project.
For the detailed master logs you will have to open a Google support ticket. Here is more information about how the GKE cluster architecture works, in case there's something related to the api-server.
Let me know if that was helpful.
You can run the commands below to check the status of all the nodes of a Kubernetes cluster. Please note that if you are using the GKE managed service, you will not be able to see the status of the master nodes; you will only see the status of the worker nodes.
kubectl get nodes -o wide
kubectl describe node nodename
You can also run the command below to check the status of the control plane components.
kubectl get componentstatus
You can use the command below to get a list of all the nodes in the GKE cluster:
kubectl get nodes -o wide
Once you have the list of nodes, you can describe a node to get its events:
kubectl describe node <Node-Name>
Based on the events you can debug the node.
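If the describe output is noisy, you can also filter the cluster events down to the single node you are investigating; a small sketch, with <Node-Name> as a placeholder:
kubectl get events --field-selector involvedObject.kind=Node,involvedObject.name=<Node-Name>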

Can't shut down influxDB in Kubernetes

I have spun up a Kubernetes cluster in AWS using the official "kube-up" mechanism. By default, an addon that monitors the cluster and logs to InfluxDB is created. It has been noted in this post that InfluxDB quickly fills up disk space on nodes, and I am seeing this same issue.
The problem is, when I try to kill the InfluxDB replication controller and service, it "magically" comes back after a time. I do this:
kubectl delete rc --namespace=kube-system monitoring-influx-grafana-v1
kubectl delete service --namespace=kube-system monitoring-influxdb
kubectl delete service --namespace=kube-system monitoring-grafana
Then if I say:
kubectl get pods --namespace=kube-system
I do not see the pods running anymore. However after some amount of time (minutes to hours), the replication controllers, services, and pods are back. I don't know what is restarting them. I would like to kill them permanently.
You probably need to remove the manifest files for influxdb from the /etc/kubernetes/addons/ directory on your "master" host. Many of the kube-up.sh implementations use a service (usually at /etc/kubernetes/kube-master-addons.sh) that runs periodically and makes sure that all the manifests in /etc/kubernetes/addons/ are active.
You can also restart your cluster, but run export ENABLE_CLUSTER_MONITORING=none before running kube-up.sh. You can see the other environment settings that affect the cluster kube-up.sh builds in cluster/aws/config-default.sh.
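Put together, the sequence could look roughly like this; it is only a sketch, and the exact sub-directory under /etc/kubernetes/addons/ holding the monitoring manifests varies by release, so list the directory first:
# on the master host
ls /etc/kubernetes/addons/
sudo rm -r /etc/kubernetes/addons/cluster-monitoring/   # directory name is an assumption; adjust to what the listing shows

# or, from your workstation, rebuild the cluster with monitoring disabled
export ENABLE_CLUSTER_MONITORING=none
cluster/kube-up.sh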