I am deploying a k8s cluster locally using Kind. The image gets deployed OK, and when I view the list of services I see the following:
The service I'm trying to access is chatt-service, and you'll notice its EXTERNAL-IP is pending. I know Minikube has a command which makes this accessible, but how do I do it on a Kind cluster?
For the LoadBalancer service type you will not be able to get a public IP, because you're running it locally; you would need to run in a cloud provider that provisions the load balancer for you, like an ALB on AWS or a Load Balancer on DigitalOcean. However, you can access this service locally using kubectl port-forward:
kubectl port-forward service/chatt-service 3002:3002
There are some additional options for making LoadBalancer services work on a Kind cluster (although port forwarding is the simplest way):
https://kind.sigs.k8s.io/docs/user/loadbalancer/
First way:
You can also expose pods and services using extra port mappings; this means manually setting the ports in your cluster-config.yaml.
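As a rough sketch (the port numbers here are placeholders; containerPort must match the NodePort your service uses, and hostPort is whatever you want exposed on your machine), the cluster config would look something like:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 30950   # the NodePort the service uses inside the node
    hostPort: 3002         # the port exposed on your local machine
    protocol: TCP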
And a possible second way (though not actually a LoadBalancer solution): you may want to check out the Ingress Guide as a cross-platform workaround.
I have two deployments (A and B), each one exposing a ClusterIP Service. Before deploying Istio, I was able to communicate from a pod in A to any of B's pods via its Service (e.g. http://B.default.svc.cluster.local/dosomecrazystuff).
After deploying Istio (1.0.5), I'm getting "connection refused" when calling http://B.default.svc.cluster.local from a pod in deployment A.
What is the default routing policy in Istio? I don't need any clever load balancing or version-based routing, just straightforward communication from A to B (the same way I would do it without Istio).
What is the absolute minimal configuration required to make it work?
Well, it seems it was some local issue with my MicroK8s deployment. On EKS and on another MicroK8s install I am able to communicate as desired without anything special.
So the answer is: no special configuration is required to make it work; it is supposed to communicate just as is.
I have a kubernetes cluster installed.
I want to use an external database (outside of my cluster) for one of my microservices, but this external DB is set up as a cluster and does not have its own load balancer.
Is there a way to create an internal load-balancing service that will make Kubernetes always direct the microservices to a running instance?
I saw that you can set a service of type LoadBalancer; how can I use it?
I tried creating one, but I saw the LoadBalancer service was created with a NodePort. Can it be used without a NodePort?
Many thanks
You cannot, as far as I'm aware, have a Kubernetes Service health-check an external service and load-balance to it.
A Service of type=LoadBalancer refers to a cloud provider's load balancer, like an ELB on AWS; it automates the process of wiring a NodePort service up to that cloud load balancer.
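For reference, a minimal type=LoadBalancer Service manifest looks something like this (names, labels, and ports are placeholders); note that the selector targets pods inside the cluster, which is exactly why this type doesn't help with an external database:

apiVersion: v1
kind: Service
metadata:
  name: my-service          # placeholder name
spec:
  type: LoadBalancer        # asks the cloud provider for an external LB
  selector:
    app: my-app             # matches pods running inside the cluster
  ports:
  - port: 5432
    targetPort: 5432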
You may be able to get the solution you require using a service mesh like Istio, but that's a relatively complex setup.
I would recommend putting something like HAProxy/Keepalived or IPVS in front of your database.
How do I expose the Kubernetes dashboard on an external IP address?
Is there any tutorial for this?
You can expose services and pods in several ways:
expose the internal ClusterIP service through Ingress, if you have that set up;
change the service type to 'type: LoadBalancer', which will try to create an external load balancer;
change the service type to 'type: NodePort', which will allocate a port in the 30000+ range on every cluster machine (a minimal sketch follows below);
expose the pod directly using 'hostPort' in the pod spec.
If you have external IP addresses on your Kubernetes nodes, you can also expose ports directly on the node hosts via NodePort/hostPort; however, I would avoid these unless it's a small test cluster.
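As referenced above, a NodePort Service for the dashboard might look something like this (a sketch only; the namespace, labels, and ports are assumptions that depend on how your dashboard is deployed, and nodePort must fall in the cluster's NodePort range, 30000-32767 by default):

apiVersion: v1
kind: Service
metadata:
  name: dashboard-nodeport
  namespace: kube-system            # assumes the dashboard runs here
spec:
  type: NodePort
  selector:
    k8s-app: kubernetes-dashboard   # must match your dashboard pods' labels
  ports:
  - port: 443
    targetPort: 8443                # the dashboard container's HTTPS port
    nodePort: 30443                 # the port opened on every node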
Depending on your cluster type (kops-created, GKE, EKS, AKS and so on), different variants may not be set up. Hosted clusters typically support and recommend LoadBalancers, which they charge for, but may or may not have support for NodePort/hostPort.
Another, more important note: you must ensure you protect the dashboard. Running an unprotected dashboard is a sure way of getting your cluster compromised; this recently happened to Tesla. A decent write-up on various ways to protect yourself was written by Joe Beda of Heptio.
Inside a Kubernetes cluster I am running one node with two deployments: a React front end and a .NET Core app. I also have a LoadBalancer service for the front-end app. (All working: I can port-forward to see the backend deployment working.)
Question: I'm trying to get the front end and the API to communicate. I know I can do that with an external-facing load balancer, but is there a way to do it using the ClusterIPs, without an external IP for the back end?
The reason we are interested in this is that it adds one more layer of security: by keeping the API reachable on the vnet only, we remove one more entry point.
If it helps, we are deploying in Azure with AKS. I know they have some weird deployment things sometimes.
Pods running on the cluster can talk to each other using a ClusterIP service, which is the default service type. You don't need a LoadBalancer service to make two pods talk to each other. According to the docs on this topic:
ClusterIP exposes the service on a cluster-internal IP. Choosing this value makes the service only reachable from within the cluster. This is the default ServiceType.
As explained in the Discovery documentation, if both pods (frontend and API) are running in the same namespace, the frontend just needs to send requests to the name of the backend service.
If they are running in different namespaces, the frontend needs to use the fully qualified domain name to be able to talk to the backend.
For example, if you have a Service called "my-service" in Kubernetes Namespace "my-ns" a DNS record for "my-service.my-ns" is created. Pods which exist in the "my-ns" Namespace should be able to find it by simply doing a name lookup for "my-service". Pods which exist in other Namespaces must qualify the name as "my-service.my-ns". The result of these name lookups is the cluster IP.
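For instance, using the placeholder names from the quote above, the two forms can be tested from inside a pod like this (a sketch; adjust the names and paths to your own services):

# from a pod in the "my-ns" namespace:
wget -qO- http://my-service/
# from a pod in any other namespace:
wget -qO- http://my-service.my-ns/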
You can find more info about how DNS works on kubernetes in the docs.
The problem with this configuration is the assumption that the frontend app will reach out to the API via the cluster's internal network. But it will not: my app runs in the client's browser, which cannot reach services and pods inside my cluster.
My cluster will need something like nginx or another external load balancer to allow my client-side API calls to reach my API.
You could alternatively use your front-end app as the proxy, but that is highly inadvisable!
I'm trying to get the front end and api to communicate
By API, if you mean the Kubernetes API server: first set up a service account and token for the front-end pod to communicate with the Kubernetes API server, by following the steps here, here and here.
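As a rough sketch of that setup (the account name is a placeholder, and the built-in 'view' role may be broader or narrower than what you need):

kubectl create serviceaccount frontend-sa
kubectl create clusterrolebinding frontend-sa-view \
    --clusterrole=view --serviceaccount=default:frontend-sa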
is there a way to do that using the clusterIPs and not have an external IP for the back end
Yes, this is possible, and it is more secure if external access is not needed for the service. A Service of type ClusterIP will not have an external IP, and the pods can talk to each other using ClusterIP:Port within the cluster.
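A minimal sketch of such a backend Service, with placeholder names and ports:

apiVersion: v1
kind: Service
metadata:
  name: backend-api       # placeholder name
spec:
  type: ClusterIP         # the default; this line can be omitted
  selector:
    app: backend-api      # matches the backend pods' labels
  ports:
  - port: 80              # port the frontend pods call
    targetPort: 5000      # port the backend container listens on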
I have set up a Kubernetes cluster. The cluster contains, among other things, a service and a deployment surfacing an API webservice (based on the subway-explorer-gmaps-proxy container).
I've deployed the service externally, using the LoadBalancer service type (this is on GCP):
$kubectl get svc subway-explorer-gmaps-proxy-service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
subway-explorer-gmaps-proxy-service LoadBalancer 10.35.252.232 35.224.78.225 9000:31396/TCP 19h
My understanding (and correct me if I'm wrong!) is that this service should now be queryable outside of the cluster, by visiting http://35.224.78.225 in the browser.
When running the Docker container locally, I can verify things are working correctly by navigating to the following URL:
http://localhost:49161/starting_x=-73.954527&starting_y=40.587243&ending_x=-73.977756&ending_y=40.687163
Looking at the kubectl get output, I expect visiting the following URL in the browser will serve me the content I'm looking for:
http://35.224.78.225:31396/starting_x=-73.954527&starting_y=40.587243&ending_x=-73.977756&ending_y=40.687163
But when I visit this URL, nothing gets served.
I suspect there is a non-fatal error in the deployment configuration. What is an effective way of debugging this problem? Are there access logs or a stdout stream somewhere I can check to see what's wrong?
You can try running through the official docs on debugging services: https://kubernetes.io/docs/tasks/debug-application-cluster/debug-service/
Beyond that, have you confirmed you're querying the load balancer on the right port? While I don't deploy on GCP, when launching a load balancer for a Kubernetes service on AWS it'll accept traffic on ports 80/443 and forward it to the NodePort of the service, which I'm guessing is 31396 in your case. What are the ports listed in kubectl get svc subway-explorer-gmaps-proxy-service -o yaml?
What I didn't realize is that Google Cloud has a separate firewall system, distinct from the connection settings managed by Kubernetes. In order to expose the application to the outside world (e.g. a web browser), I needed to also modify the Google Cloud firewall rules (see for example this answer as to how).
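A rough sketch of such a rule with gcloud (the rule name is a placeholder; 31396 is the NodePort from the service listing above):

gcloud compute firewall-rules create allow-gmaps-proxy-nodeport \
    --allow tcp:31396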
To test that the application is working on the Kubernetes side, you need not modify cloud firewall rules. Instead, run wget, curl, or some similar data retrieval command from a different pod on the cluster, pointed at the internal IP address and port number of the pod of interest.
For example, the "hello world" pod used by the Kubernetes documentation is the busybox pod (defined here). By creating this pod in my cluster, and then running the following:
kubectl exec busybox -c busybox -- wget "10.35.249.23:9000"
I was able to confirm that the service is functioning correctly within Kubernetes. You can also use any other pod which has wget available in the underlying OS; I just used busybox because all of my other pods use Google's Container-Optimized OS, which doesn't include it.
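If you don't already have such a pod, one quick way to create it (a sketch; the image and sleep duration are arbitrary):

kubectl run busybox --image=busybox --restart=Never -- sleep 3600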
Finally, for the purposes of debugging, I went ahead and added a /status endpoint to my API application service, which serves {"status": "OK"} when the core service is working. I recommend following this pattern with other applications as well, as it gives a simple endpoint you can test to make sure that, at a minimum, the webserver is responding to input. In my case, I discovered that the /status page is OK but the API calls are failing, which allowed me to narrow the issue down to unresolved Promises caused by a bad credentials secret.
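Following the same pattern as the check above, the status endpoint can then be probed from inside the cluster (a sketch reusing the internal IP and port from earlier):

kubectl exec busybox -c busybox -- wget -qO- "10.35.249.23:9000/status"
# expected output: {"status": "OK"}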