How to connect to ZooKeeper after deploying a Helm chart in GKE? - apache-zookeeper

We are creating a VMware Carvel package and I need to do a sanity check for ZooKeeper. How can I check the output in GKE?
[Screenshot: ZooKeeper output]
curl to localhost is failing to connect.

The problem is that your Service type is ClusterIP, which is not reachable from outside the Kubernetes cluster. There are two ways you could do this:
If you really need to access this regularly from outside the cluster, you should deploy a Service of type NodePort or LoadBalancer, or an Ingress. These are reachable from outside.
If you only want to check something quickly, you can temporarily make ZooKeeper visible with:
kubectl port-forward zookeeper-0 2181
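To verify the server is actually healthy over the forwarded port, one option is ZooKeeper's ruok four-letter-word command. A minimal sketch, assuming the pod is named zookeeper-0, the client port is 2181, and the server's 4lw.commands.whitelist includes ruok (on ZooKeeper 3.5+ it must be whitelisted explicitly):
# in a second terminal, while the port-forward is running;
# a healthy server answers "imok"
echo ruok | nc localhost 2181
Alternatively, you can skip the port-forward and run the check inside the pod, e.g. kubectl exec zookeeper-0 -- zkServer.sh status (the path to zkServer.sh depends on the image).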

Related

services “kubernetes-dashboard”: can't access the Kubernetes UI

I am deploying the Kubernetes UI using this command:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.2.0/aio/deploy/recommended.yaml
And it responds with "Unable to connect to the server: dial tcp 185.199.110.133:443: i/o timeout".
I am behind a proxy; how can I fix it?
None of the Services deployed via the supplied URL have a type specified, so they use the default Service type, which is ClusterIP.
Services of type ClusterIP are only accessible from inside your Kubernetes cluster.
If you want the Dashboard to be accessible from outside your cluster, you will need a Service of type NodePort. A NodePort Service assigns a random high-numbered port (30000-32767 by default) on all your nodes, on which your application, in this case the k8s dashboard, will be accessible via ${ip-of-any-node}:${assigned-nodeport}.
For more information, please take a look at the official k8s documentation.
If your cluster is behind a proxy, also make sure that you can reach your cluster nodes' external IPs from wherever you are sending the requests.
To find out which port number has been assigned to your NodePort Service, use kubectl describe service ${servicename} or kubectl get service ${servicename} -o yaml.
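As a sketch of the whole flow, assuming the names used by the v2.2.0 manifest (a Service called kubernetes-dashboard in the kubernetes-dashboard namespace):
# switch the dashboard Service to NodePort
kubectl -n kubernetes-dashboard patch service kubernetes-dashboard \
  -p '{"spec": {"type": "NodePort"}}'
# read back the assigned nodePort
kubectl -n kubernetes-dashboard get service kubernetes-dashboard \
  -o jsonpath='{.spec.ports[0].nodePort}'
The dashboard should then answer at https://${ip-of-any-node}:${assigned-nodeport} (it serves HTTPS).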

Difference between kubectl port-forwarding and NodePort service

What's the difference between kubectl port-forwarding (which forwards a port from the local host to a pod in the cluster to gain access to cluster resources) and the NodePort Service type?
You are comparing two completely different things. You should compare ClusterIP, NodePort, LoadBalancer and Ingress.
The first and most important difference is that a NodePort exposure is persistent, while with port-forwarding you always have to run kubectl port-forward ... and keep it active.
kubectl port-forward is meant for testing, labs, and troubleshooting, not for long-term solutions. It creates a tunnel between your machine and Kubernetes, so it only serves traffic to and from your machine.
A NodePort gives you a long-term solution, and it can serve traffic to and from anywhere inside the network your nodes reside in.
If you use port-forwarding (kubectl port-forward svc/{your_service} -n {service_namespace} {local_port}:{service_port}), a ClusterIP Service is all you need; kubectl handles the traffic and acts as the proxy for it.
If you use a NodePort to access your service, a port has to be opened on the worker nodes.
When you use port-forwarding, the cluster essentially behaves as if it had a NodePort Service running inside it, without you creating a Service. This is strictly for development settings; with one command you get the equivalent of a NodePort Service.
# find the name of the pod running the NATS streaming server
kubectl get pods
kubectl port-forward nats-pod-5443532542c8-5mbw9 4222:4222
kubectl will set up a proxy that forwards traffic from a port on your local machine to a port on that specific pod.
However, to create a NodePort you need to write a YAML config file to set up a Service. It exposes the port permanently and load-balances across the matching pods.
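A minimal Service sketch for the NATS example above; the selector label and the nodePort value are assumptions, so adjust them to match your deployment:
apiVersion: v1
kind: Service
metadata:
  name: nats-nodeport
spec:
  type: NodePort
  selector:
    app: nats          # assumed pod label
  ports:
    - port: 4222       # the Service's in-cluster port
      targetPort: 4222 # container port on the pod
      nodePort: 30222  # must fall in the default 30000-32767 range
Apply it with kubectl apply -f nats-nodeport.yaml and the pod becomes reachable at ${any-node-ip}:30222.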

Why can't access my gRPC REST service that is running in Minikube?

I've been learning Kubernetes recently and just came across this small issue. For some sanity checking, here is my gRPC app running locally:
> docker run -p 8080:8080 -it olamai/simulation:0.0.1
< omitted logs >
> curl localhost:8080/v1/todo/all
{"api":"v1","toDos":[{}]}
So it works! All I want to do now is deploy it in Minikube and expose the port so I can make calls to it. My end goal is to deploy it to a GKE or Azure cluster and make calls to it from there (again, just to learn and get the hang of everything.)
Here is the yaml I'm using to deploy to minikube
And this is what I run to deploy it on minikube
> kubectl create -f deployment.yaml
I then run this to get the url
> minikube service sim-service --url
http://192.168.99.100:30588
But this is what happens when I make a call to it
> curl http://192.168.99.100:30588/v1/todo/all
curl: (7) Failed to connect to 192.168.99.100 port 30588: Connection refused
What am I doing wrong here?
EDIT: I figured it out, and you should be able to see the update in the linked file. I had imagePullPolicy set to Never, so the image was out of date 🤦
I have a new question now... I'm now able to just create the deployment in minikube (no NodePort) and still make calls to the API... shouldn't the deployment need a NodePort Service to expose ports?
I checked your yaml file and it works just fine. But I realized that you put two types for your Service, LoadBalancer and also NodePort, which is not needed.
If you check the definition of LoadBalancer in this documentation, you will see:
LoadBalancer: Exposes the service externally using a cloud provider's load balancer. NodePort and ClusterIP services, to which the external load balancer will route, are automatically created.
As an answer to your next question: you probably put type: LoadBalancer in your yaml file, and since a LoadBalancer Service automatically creates a NodePort, that's why you are able to reach it via a node port anyway.
If you put type: ClusterIP in your yaml, the Service will be exposed only within the cluster, and you won't be able to reach it from outside the cluster.
From the same documentation:
ClusterIP: Exposes the service on a cluster-internal IP. Choosing this value makes the service only reachable from within the cluster. This is the default ServiceType.
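For reference, a sketch of what the Service part of such a yaml might look like; the name sim-service comes from the question, while the selector label is an assumption:
apiVersion: v1
kind: Service
metadata:
  name: sim-service
spec:
  type: LoadBalancer   # a NodePort is created automatically behind this
  selector:
    app: simulation    # assumed pod label
  ports:
    - port: 8080
      targetPort: 8080
On Minikube there is no cloud load balancer, so the external IP typically stays pending and you reach the Service through the automatically created NodePort; that is what minikube service sim-service --url prints.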

Access nodeport via kube-proxy from another machine

I have a Kubernetes cluster (node01-03).
There is a Service with a NodePort to access a pod (NodePort 31000).
The pod is running on node03.
I can access the service with http://node03:31000 from any host, and on every node I can access it locally as http://[name_of_the_node]:31000. But from another machine I cannot access the service as http://node01:31000, even though there is a listener (kube-proxy) on node01 at port 31000. The iptables rules look okay to me. Is this how it's intended to work? If not, how can I troubleshoot further?
NodePort is exposed on every node in the cluster. https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport clearly says:
each Node will proxy that port (the same port number on every Node) into your Service
So, from both inside and outside the cluster, the service can be accessed using NodeIP:NodePort on any node in the cluster and kube-proxy will route using iptables to the right node that has the backend pod.
However, if the service is accessed using NodeIP:NodePort from outside the cluster, we need to first make sure that NodeIP is reachable from where we are hitting NodeIP:NodePort.
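A quick reachability sketch from the client machine (node names taken from the question; nc is assumed to be available):
# can we even reach node01, and is anything answering on the port?
ping -c 3 node01
nc -vz node01 31000
If nc reports the connection refused or timing out while the same check succeeds against node03, the problem is network-level reachability or filtering rather than the Service itself.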
If NodeIP:NodePort cannot be accessed on a node that is not running the backend pod, it may be caused by the default DROP rule on the FORWARD chain (introduced by Docker 1.13 for security reasons). Here is more info about it. Also see step 8 here. A solution for this is to add the following rule on the node:
iptables -A FORWARD -j ACCEPT
The k8s issue for this is here and the fix is here (the fix should be there in k8s 1.9).
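To check whether this applies, a quick sketch to run on the affected node (assuming iptables is in use):
# the first line shows the chain policy; "policy DROP" indicates the issue
sudo iptables -L FORWARD -n | head -1
Note that iptables -A FORWARD -j ACCEPT appends a blanket accept rule; on a production node you may want something narrower, scoped to the pod network.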
Three other options to enable external access to a service are:
ExternalIPs: https://kubernetes.io/docs/concepts/services-networking/service/#external-ips
LoadBalancer with an external, cloud-provider's load-balancer: https://kubernetes.io/docs/concepts/services-networking/service/#type-loadbalancer
Ingress: https://kubernetes.io/docs/concepts/services-networking/ingress/
If you are accessing pods from within the Kubernetes cluster, you don't need to use the NodePort; address the Service directly instead. Say podA needs to access podB through a Service called serviceB. Assuming HTTP, all you need is http://serviceB:{port}/ (the Service's port, which forwards to the pod's targetPort).
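A quick in-cluster sketch; serviceB is the hypothetical name from the example above and 8080 stands in for its port:
# launch a throwaway pod and hit the service by its DNS name;
# cluster DNS resolves serviceB to its ClusterIP
kubectl run curl-test --rm -it --restart=Never \
  --image=curlimages/curl -- curl http://serviceB:8080/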