We are running Spring Boot Admin with Kubernetes auto-discovery on GCP in GKE.
The client has a servlet context path configured, and it is not being shown as an active application in Spring Boot Admin.
What should we do to get it to work?
Thanks.
Spring Boot Admin Kubernetes auto-discovery: show all Actuator endpoints of a Spring Boot client with servlet context-path setup
Does Pilot use the ADS APIs to configure the Envoy proxy for each service directly, or are the configs first pushed to the istio-agent and then mapped to the proxy?
I am unable to understand the role of istio-agent while configuring the proxies. The documentation does not provide sufficient information.
According to the documentation, the istio-agent runs inside a pod and is used to request certificates from Citadel over an open gRPC connection. However, nothing is stated about proxy configurations and service discovery.
I replaced the Eureka service with Spring Cloud Kubernetes Discovery to run in a Kubernetes cluster (microk8s), and it works fine in k8s without Eureka. But how can I use Spring Cloud Kubernetes Discovery for local debugging? For example, when I start my microservices locally without Kubernetes, how can I resolve them by name? Is it necessary to use a local discovery service like Eureka in that case, or is there some other way?
A simple approach is to create a network of services via a docker-compose file: run the applications that your service needs to communicate with as Docker containers, while the main services you need to debug are opened in an editor such as VS Code.
Service discovery then happens with the help of docker-compose, and Eureka or Spring Cloud discovery is not required.
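A minimal sketch of that idea, assuming a hypothetical inventory service declared in docker-compose.yml: a container on the same Compose network can reach it by name through Compose's built-in DNS, while a service being debugged on the host would use localhost plus the published port instead. Service name, port, and path below are placeholders.

```java
import org.springframework.web.client.RestTemplate;

public class InventoryClient {
    public static void main(String[] args) {
        RestTemplate rest = new RestTemplate();

        // From another container on the same docker-compose network:
        // "inventory" is the service name from docker-compose.yml, resolved by Compose's DNS.
        String fromContainer = rest.getForObject("http://inventory:8080/api/items", String.class);

        // From the service you are debugging on the host (e.g. in VS Code),
        // use localhost and the port published in docker-compose.yml instead.
        String fromHost = rest.getForObject("http://localhost:8080/api/items", String.class);

        System.out.println(fromContainer + "\n" + fromHost);
    }
}
```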
I have a Kubernetes cluster deployed locally with Kubeadm/Vagrant with a master and two workers with the following IPs:
master: 192.168.250.10
worker1: 192.168.250.11
worker2: 192.168.250.12
Then I have an application composed of a ReactJS frontend and a Spring Boot backend running in two separate containers in the same Pod. When I submit a form on the frontend, the application calls an API in the backend, which internally calls a Kubernetes API. To authenticate to the cluster I use a correctly configured .kube/config file.
When the application (frontend/backend) runs outside the cluster everything works fine. I use docker-compose to start up the two containers just for the unit tests. The .kube/config file has https://192.168.250.10:6443 as the API server URL. The problem is that when I run the application in containers inside the cluster, the IP 192.168.250.10 doesn't work and communication ends in a timeout exception.
I am sure the application is OK, because the same application works fine in IBM Cloud, where the .kube/config contains an API server with a reachable public IP.
My question is, which IP should I put into .kube/config when I run the application locally inside my cluster? How can I get this IP using kubectl commands?
Thanks in advance for any help.
I have Minikube running Kubernetes inside a VirtualBox VM.
One of the Docker containers it runs is an Apache Ignite server.
During development I try to access the Ignite server from an outside Java client, but discovery fails with every configuration I have tried.
Is it possible at all?
If yes, can someone give an example?
To enable Apache Ignite node auto-discovery in Kubernetes, you need to enable TcpDiscoveryKubernetesIpFinder in your IgniteConfiguration. Read more about this at https://apacheignite.readme.io/docs/kubernetes-deployment. Your Kubernetes service definition should specify the container's exposed port; then minikube can give you the service URL after a successful deployment.
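A minimal Java sketch of the IgniteConfiguration described above, assuming the ignite-kubernetes module is on the classpath and that the Ignite pods are fronted by a Kubernetes service named ignite-service in the default namespace (both names are placeholders, and setter names can vary slightly between Ignite versions):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder;

public class IgniteKubernetesNode {
    public static void main(String[] args) {
        // IP finder that asks the Kubernetes API for the addresses of the pods
        // behind the given service, instead of using static IPs or multicast.
        TcpDiscoveryKubernetesIpFinder ipFinder = new TcpDiscoveryKubernetesIpFinder();
        ipFinder.setNamespace("default");          // placeholder namespace
        ipFinder.setServiceName("ignite-service"); // placeholder Kubernetes service name

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setDiscoverySpi(new TcpDiscoverySpi().setIpFinder(ipFinder));

        Ignite ignite = Ignition.start(cfg);
        System.out.println("Joined topology with " + ignite.cluster().nodes().size() + " node(s)");
    }
}
```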
I have two Kubernetes controllers and services with running pods, named web and api respectively.
In my web pod I am using superagent to try to access the api pod at http://api:3000/api/user; this results in the error ERR_NAME_NOT_RESOLVED.
However, if I run a shell on my web pod and curl http://api:3000/api/user, everything works as it should.
Am I missing something fundamental about how superagent works? Or something else?
If your superagent call runs in a browser, the browser is not part of the Kubernetes cluster, hence it neither uses kube-dns nor can it access cluster IPs.
To make it work you need to expose your api service to the external world by means of a NodePort/LoadBalancer service or an Ingress.
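To illustrate the distinction (in plain Java rather than the browser-side superagent call, and with a placeholder node IP and NodePort): the cluster-internal name only resolves from inside a pod, while the browser has to use the externally exposed address.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ApiUserCall {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // Works only from inside the cluster (e.g. the shell in the web pod),
        // because "api" is resolved by kube-dns:
        String inCluster = "http://api:3000/api/user";

        // What the browser needs once the api service is exposed, e.g. as a NodePort;
        // 192.168.99.100 and 30080 are placeholders for the node IP and assigned NodePort:
        String external = "http://192.168.99.100:30080/api/user";

        HttpRequest request = HttpRequest.newBuilder(URI.create(external)).GET().build();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}
```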