Vitess guestbook example not working in minikube - kubernetes

I am following the instructions on how to set up Vitess in Kubernetes. I am using Minikube 0.15 on my local machine (Windows 10) running on VirtualBox 5.1.12.
I managed to get all the way to step 12 before I started seeing strange things happening.
When I run ./vtgate-up.sh everything starts fine, but the service stays in a pending state.
At first I didn't think anything of it until I went on to the next step of trying to install the guestbook client app.
After running ./guestbook-up.sh again everything went fine, no errors, but the service is again in a pending state, and I don't get an external endpoint.
I tried going on to the next step, but when I run kubectl get service guestbook I am supposed to get an EXTERNAL-IP, and I don't. The instructions say to wait a few minutes, but I have let this run for an hour and still nothing.
So here is where I am stuck. What do I do next?

It's normal that you can't get an external IP in this scenario since that gets created in response to the LoadBalancer service type, which does not work in Minikube.
For the vtgate service, it actually shouldn't matter since the client (the guestbook app) is inside Kubernetes and can use the cluster IP. For the guestbook, you could try to work around the lack of LoadBalancer support in Minikube to access the frontend from outside the cluster in a couple different ways:
Use kubectl port-forward to map a local port to a particular guestbook pod.
Or, change the guestbook service type to NodePort and access that port on your VM's IP address.
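A rough sketch of both options (the pod name guestbook-abc123, local port 8080, and container port 80 below are assumptions; check your actual pod names and ports with kubectl get pods and the pod spec):

# Option 1: forward local port 8080 to port 80 on one guestbook pod
kubectl port-forward guestbook-abc123 8080:80
# Option 2: switch the service to NodePort, then browse to <minikube-ip>:<node-port>
kubectl patch service guestbook -p '{"spec": {"type": "NodePort"}}'
minikube ip                     # prints the VM's IP address
kubectl get service guestbook   # shows the node port Kubernetes assigned (3xxxx range)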

Related

Kubernetes image update breaks pods and I have to kill deployments

So my setup on kubernetes is basically an external nginx load balancer that sends traffic for virtual hosts across the nodes.
Everything runs in Docker containers:
- 10 instances of a front-end pod, which is a compiled Angular app
- 10 instances of a pod with two containers: a “built” image of a Symfony app plus a dedicated PHP-FPM container for each pod
- an external MySQL server on the local network, which runs a basic Docker container
- 10 CDN pods which simply run an nginx server to pick up static content requests
- 10 pods that run a socket chat application via nginx
- a dedicated network of OpenVidu servers
- PHP-FPM pods for multiple cron jobs
All works fine and dandy until I, say, update the front-end image and roll the update out to the cluster. The pods all update without a problem, but I end up with a strange issue of pages not loading, or partially loading, or requests meant for the backend pods failing or somehow being served by frontend pods. It’s really quite random.
The only way to get it back up again is to destroy every deployment and fire them up again, and I’ve no idea what is causing it. Once it’s all restarted, it all works again.
Just looking for ideas on what it could be. Has anyone experienced this behaviour before?

Kind cluster: how to access a service using LoadBalancer

I am deploying a k8s cluster locally using Kind. The image gets deployed OK, and when I view the list of services I see the following.
The service I'm trying to access is chatt-service, and if you notice, its EXTERNAL-IP is pending. I know Minikube has a command which makes this accessible, but how do I do it on a Kind cluster?
For the LoadBalancer service type you will not be able to get a public IP, because you're running locally; you would need to run in a cloud provider that provisions the load balancer for you, like an ELB on AWS or a Load Balancer on DigitalOcean. However, you can access this service locally using kubectl port-forward:
kubectl port-forward service/chatt-service 3002:3002
There are some additional options for making LoadBalancer work under a Kind cluster (though port forwarding is the simplest way):
https://kind.sigs.k8s.io/docs/user/loadbalancer/
First way:
You can expose pods and services using extra port mappings, which means manually setting ports in cluster-config.yaml (see the sketch after this list).
Second way (not actually a LoadBalancer solution, but a cross-platform workaround):
You may want to check out the Ingress Guide.
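As a sketch of the first way, assuming chatt-service is switched to type NodePort and pinned to node port 30080 in its manifest (the port numbers here are assumptions to adapt), the Kind cluster config can map that port to 3002 on the host:

kind create cluster --config=- <<EOF
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
  # Assumed mapping: host port 3002 -> NodePort 30080 on the node container
  - containerPort: 30080
    hostPort: 3002
EOF

After the cluster is recreated this way, http://localhost:3002 should reach the service through the mapped node port.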

EKS internal service connection unreliable

I just set up a new EKS cluster (latest available version, three nodes using the default AMI).
I deployed a Redis instance in it as a Kubernetes service and exposed it. I can access the Redis database through internal DNS, like mydatabase.redis (it's deployed in the redis namespace). From another pod I can connect to my Redis database; however, sometimes the connection takes more than 10 seconds.
It doesn't seem to be a DNS resolution issue, as host mydatabase.redis responds immediately with the service IP address. However, when I try to connect to it (for example: nc mydatabase.redis 6379 -v), it sometimes connects instantly and sometimes takes more than 10 seconds.
All my services are impacted and I don't know why. I didn't change any settings in my cluster; this is a basic EKS cluster.
How can I debug this?
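Since host already rules out slow DNS resolution, one way to gather data is to time a batch of raw TCP connects from the client pod and see how often the delay shows up (a quick sketch reusing the nc invocation from the question):

# Run 20 timed connection tests against the Redis service
for i in $(seq 1 20); do
  # -z: only test that the port accepts connections; -w 15: 15-second timeout
  time nc -z -w 15 mydatabase.redis 6379
done

If only some iterations are slow while the resolved IP stays constant, the delay is in the connection path rather than in name resolution.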

How to debug a Kubernetes service endpoint that isn't serving correctly?

I have set up a Kubernetes cluster. The cluster contains, among other things, a service and deployment surfacing an API webservice (based on the subway-explorer-gmaps-proxy container).
I've deployed the service externally, using the LoadBalancer service type (this is on GCP):
$ kubectl get svc subway-explorer-gmaps-proxy-service
NAME                                  TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)          AGE
subway-explorer-gmaps-proxy-service   LoadBalancer   10.35.252.232   35.224.78.225   9000:31396/TCP   19h
My understanding (and correct me if I'm wrong!) is that this service should now be queryable outside of the cluster, by visiting http://35.224.78.225 in the browser.
When running the Docker container locally, I can verify things are working correctly by navigating to the following URL:
http://localhost:49161/starting_x=-73.954527&starting_y=40.587243&ending_x=-73.977756&ending_y=40.687163
Looking at the kubectl get output, I expect visiting the following URL in the browser will serve me the content I'm looking for:
http://35.224.78.225:31396/starting_x=-73.954527&starting_y=40.587243&ending_x=-73.977756&ending_y=40.687163
But when I visit this URL, nothing gets served.
I suspect there is a non-fatal error in the deployment configuration. What is an effective way of debugging this problem? Are there access logs or a stdout stream somewhere I can check to see what's wrong?
You can try running through the official docs on debugging services: https://kubernetes.io/docs/tasks/debug-application-cluster/debug-service/
Beyond that, have you confirmed you're querying the load balancer on the right port? While I don't deploy on GCP, when launching a load balancer for a Kubernetes service on AWS it'll accept traffic on port 80/443 and forward it to the NodePort of the service, which I'm guessing is 31396 in your case. What are the ports listed in kubectl get svc subway-explorer-gmaps-proxy-service -o yaml?
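For instance, to pull out just the ports section instead of scanning the whole YAML (jsonpath is a standard kubectl output option):

kubectl get svc subway-explorer-gmaps-proxy-service -o jsonpath='{.spec.ports}'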
What I didn't realize is that Google Cloud has a separate firewall system, which is distinct from the connection settings managed by Kubernetes. In order to expose the application to the outside world (e.g. a web browser, for example), I need to also modify the Google Cloud Firewall rules (see for example this answer as to how).
To test that the application is working on the Kubernetes side, you need not modify cloud firewall rules. Instead, run wget, curl, or some similar data retrieval command from a different pod on the cluster, pointed at the internal IP address and port number of the pod of interest.
For example, the "hello world" pod used by the Kubernetes documentation is the busybox pod (defined here). By creating this pod in my cluster and then running the following:
kubectl exec busybox -c busybox -- wget "10.35.249.23:9000"
I was able to confirm that the service is functioning correctly within Kubernetes. You can also use any other pod whose underlying OS provides wget; I just used busybox because all of my other pods use Google's Container-Optimized OS, which doesn't include it.
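If the linked definition isn't handy, a hedged alternative is to create a similar throwaway pod with kubectl run (the sleep just keeps the container alive so you can exec into it):

# Create a single busybox pod (no deployment) that idles for an hour
kubectl run busybox --image=busybox --restart=Never -- sleep 3600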
Finally, for the purposes of debugging, I went ahead and added a /status endpoint to my API application service which serves {"status": "OK"} when the core service is working. I recommend following this pattern with other applications as well, as it gives a simple endpoint you can test to make sure that, at a minimum, the webserver is responding to input. In my case, I discovered that the /status page was OK but the API calls were failing, which allowed me to narrow the issue down to unresolved Promises caused by a bad credentials secret.
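As a sketch of that pattern in action, reusing the busybox pod and the internal address from above:

# A healthy webserver should return {"status": "OK"} here; if it does but real
# API calls fail, the problem is in the application logic, not the plumbing
kubectl exec busybox -c busybox -- wget -qO- "10.35.249.23:9000/status"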

Kubernetes - Unable to hit the server

We deployed a containerized app by pulling a public Docker image from Docker Hub and were able to get a pod running at 172.30.105.44. Hitting this IP from a REST client, or with curl/ping, gives no response. Can someone please guide us on where we are going wrong?
Firstly, find out the IP of your node by executing the command
kubectl get nodes
Get the information about the running service by executing the command kubectl describe services <service-name>
Make a note of the field NodePort from here.
To access your service that is already running, hit the endpoint - nodeIP:NodePort.
You can now access your service successfully!
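Put together, the steps look roughly like this (<service-name>, <node-ip>, and <node-port> are placeholders for your own values):

kubectl get nodes -o wide                                  # note a node's INTERNAL-IP or EXTERNAL-IP
kubectl describe services <service-name> | grep NodePort   # note the assigned node port
curl http://<node-ip>:<node-port>/                         # hit the service from outside the cluster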
I am not sure where you have deployed (AWS, GKE, bare metal), but you should make sure you have the following:
https://kubernetes.io/docs/user-guide/ingress/
https://kubernetes.io/docs/user-guide/services/
Ingress will work out of the box on GKE, but with an AWS installation, you may need to make sure you have nginx-ingress pods running.
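If you go the Ingress route, a minimal manifest looks roughly like this (sketched against the current networking.k8s.io/v1 API; my-app and my-service are placeholders, and your controller may additionally need an ingressClassName):

kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service   # placeholder: the Service in front of your pods
            port:
              number: 80
EOF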