Kubernetes deleted pod IP address is referenced by RabbitMQ, which is still accepting messages - kubernetes

I am running RabbitMQ as a service on Kubernetes, and I have multiple microservices that register with and consume messages from RabbitMQ.
The services are running in a multi-node cluster.
I installed RabbitMQ using the Bitnami Helm chart 8.6.1 with a single replica.
I have 2 replicas of each service (consumer), so there should be at most 2 consumers for each event on RabbitMQ.
After deploying a new container (consumer) on K8s, the old container (consumer), which has already been terminated, is still visible as a RabbitMQ connection, and that consumer is still accepting messages; because of that, the old pod's code gets executed (that old container was registered on 2nd April 2021).
I have verified that the new deployment has all updated images with the new hash.
I have checked the IP addresses of all K8s pods, but that unwanted IP is not visible.
Does anyone know how I can trace that old pod IP on k8s? That pod is no longer visible in kubectl get pods.
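A rough way to cross-reference the broker's open connections against live pod IPs is sketched below, assuming the Bitnami chart's default pod name rabbitmq-0 in the default namespace (both are placeholders here):

    # List every pod IP in every namespace and look for the stale address
    kubectl get pods --all-namespaces -o wide

    # List the peer address and state of every connection the broker currently holds
    kubectl exec -n default rabbitmq-0 -- rabbitmqctl list_connections pid peer_host peer_port state

    # If the stale connection is still listed, it can be closed by hand
    kubectl exec -n default rabbitmq-0 -- rabbitmqctl close_connection "<pid from list_connections>" "closing stale consumer"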
Thanks

Related

Why is a deployment with an Istio sidecar having connectivity issues outside the cluster when running on OpenShift?

I have a deployment with an istio-proxy sidecar running in an OpenShift cluster.
It connects to a MySQL database on an external VM outside the cluster.
When I am not using the sidecar, it connects to the DB on the first try.
With the istio-sidecar, however, the deployment cannot connect to the DB on the first attempt.
It connects to the DB after 2-3 retries, which results in the pod restarting 2-3 times.
Where might I be going wrong? Any help or insight would be appreciated.
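One common cause of this pattern is the application container starting (and dialing MySQL) before the istio-proxy sidecar is ready. A sketch of how to check, and of the Istio 1.7+ mitigation, follows; pod, container, and deployment names are placeholders, and this assumes the startup race really is the cause:

    # Did the app try to reach MySQL before the sidecar was ready?
    oc logs <pod-name> -c istio-proxy | grep -i "envoy ready"
    oc logs <pod-name> -c <app-container> --previous

    # Istio 1.7+ can hold the app container until the proxy is ready
    oc patch deployment <deployment-name> --type merge -p '{
      "spec": {"template": {"metadata": {"annotations": {
        "proxy.istio.io/config": "holdApplicationUntilProxyStarts: true"
      }}}}
    }'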

While deploying Kafka on on-premises k8s, the pod status stays Pending for a long time

I am trying to use Helm charts to deploy Kafka and ZooKeeper on a local k8s cluster, but when I check the status of the respective pods they show Pending for a long time and are not assigned to any node, even though I have 2 healthy worker nodes running.
I tried deleting the pods and redeploying, but I landed in the same situation and could not get the pods to run. I need help on how to get these pods running.
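A quick way to see why the scheduler is leaving the pods Pending is sketched below; the pod name depends on the chart and is a placeholder:

    # The Events section at the bottom shows the scheduling failure reason
    kubectl describe pod <kafka-pod-name>

    # Kafka/ZooKeeper charts usually request PersistentVolumeClaims; on a local
    # cluster they stay Pending unless a default StorageClass or matching PVs exist
    kubectl get pvc
    kubectl get storageclass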

Node networking issue with OpenShift

I am running my services on an OpenShift cluster with all the nodes in Ready status.
I found that a few microservice pods have networking issues on certain nodes, even though they are up and running.
When they run on other nodes they are fine.
Also, what can be the reason behind a pod showing stickiness: even after a restart the pod is deployed on the same node again and again, and there is no taint/toleration scenario involved.
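A couple of checks that might narrow down why a pod keeps landing on the same node; pod names are placeholders, and this is only a diagnostic sketch:

    # Where are the affected replicas scheduled?
    oc get pods -o wide

    # Anything pinning the pod: nodeSelector, affinity, or a bound local volume?
    oc get pod <pod-name> -o yaml | grep -i -A5 "nodeSelector\|affinity\|nodeName"
    oc get pv -o yaml | grep -i -A5 "nodeAffinity"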

Deploying a stateless Go app with Redis on Kubernetes

I have deployed a stateless Go web app with Redis on Kubernetes. The Redis pod is running fine, but the main issue is with the application pod, which gets the error dial tcp: i/o timeout in its log. Thank you!!
Please take a look at: aks-vm-timeout.
Make sure that the default network security group isn't modified and that both ports 22 and 9000 are open for connection to the API server. Check whether the tunnelfront pod is running in the kube-system namespace using the kubectl get pods --namespace kube-system command.
If it isn't, force deletion of the pod and it will restart.
Also make sure the Redis port is open.
More info about troubleshooting: dial-backend-troubleshooting.
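A minimal sketch of those two checks; the tunnelfront pod name and Redis service name are placeholders, and 6379 is assumed to be the Redis port:

    # Find and, if needed, force-restart tunnelfront
    kubectl get pods --namespace kube-system | grep tunnelfront
    kubectl delete pod <tunnelfront-pod-name> --namespace kube-system --grace-period=0 --force

    # Check that the Redis port is reachable from inside the cluster
    kubectl run redis-check --rm -it --image=busybox --restart=Never -- nc -zv <redis-service> 6379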
EDIT:
Answering your question about tunnelfront:
tunnelfront is an AKS system component that's installed on every cluster; it helps facilitate secure communication between your hosted Kubernetes control plane and your nodes. It's needed for certain operations like kubectl exec, and it will be redeployed to your cluster on version upgrades.
Speaking about the VM:
I would SSH into it and start watching the disk I/O latency using bpf/bcc tools and the docker/kubelet logs.
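Roughly, once on the node, something like the following; the bcc tools install path varies by distro packaging, so treat it as an example:

    # Histogram of block-device I/O latency, printed every 5 seconds
    sudo /usr/share/bcc/tools/biolatency 5

    # Container runtime and kubelet logs for the same window
    sudo journalctl -u docker -u kubelet --since "10 min ago" | grep -i "timeout\|error"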

Two replicas of AWX on k8s

I deployed one replica of AWX using the installation role in the project's repo. Then I scaled the AWX StatefulSet to 2 replicas, but when I tail the logs of the pods, for example awx-0 and awx-1, they both receive the same request at the same time; the web service doesn't load-balance requests.
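One way to check whether the Service in front of the StatefulSet is actually spreading traffic across both pods; the service name below is a placeholder, and this is only a first diagnostic step:

    # Both pod IPs should appear as endpoints of the web Service
    kubectl get endpoints <awx-web-service>

    # Session affinity set to ClientIP would pin clients to one pod; both pods
    # logging the *same* request suggests traffic may not be going through the
    # Service at all
    kubectl describe svc <awx-web-service> | grep -i "session affinity"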