I currently use a SOCKS proxy to access 10.64.140.43.nip.io from my laptop. My Kubernetes service is hosted on a server with static IP address 192.168.0.29 on the same network as my laptop. How do I access the Kubernetes service without the SOCKS proxy by directly going to 192.168.0.29?
Here's the output from kubectl get services | grep 10.64.140.43:
kubeflow istio-ingressgateway-workload LoadBalancer 10.152.183.47 10.64.140.43 80:32696/TCP,443:31536/TCP 24h
Try using a NodePort service instead of a LoadBalancer, or use something like klipper-lb.
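If you go the NodePort route, note that a LoadBalancer service usually already allocates node ports (32696 and 31536 in the output above), so the gateway may already be reachable on the node's own IP. A rough sketch, assuming the service name and namespace from the output and that the Istio gateway routes on the 10.64.140.43.nip.io host name:
# Try the already-allocated HTTP node port directly against the node's IP:
curl -H "Host: 10.64.140.43.nip.io" http://192.168.0.29:32696/
# Or switch the service type to NodePort explicitly and check the assigned ports:
kubectl -n kubeflow patch svc istio-ingressgateway-workload -p '{"spec": {"type": "NodePort"}}'
kubectl -n kubeflow get svc istio-ingressgateway-workload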
Related
I have a single node Kubernetes cluster, installed using k3s on bare metal. I also run some services on the host itself, outside the Kubernetes cluster. Currently I use the external IP address of the machine (192.168.200.4) to connect to these services from inside the Kubernetes network.
Is there a cleaner way of doing this? What I want to avoid is having to reconfigure my Kubernetes pods if I decide to change the IP address of my host.
Magic I wish existed: a Kubernetes service or IP that automagically points to my external IP (192.168.200.4), or a DNS name that resolves to the node's external IP address.
That's what ExternalName services are for (https://kubernetes.io/docs/concepts/services-networking/service/#externalname):
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: ExternalName
  externalName: ${my-hostname}
  ports:
    - port: 80
Then you can access the service from within Kubernetes as my-service.${namespace}.svc.cluster.local.
See: https://livebook.manning.com/concept/kubernetes/external-service
After the service is created, pods can connect to the external service through the external-service.default.svc.cluster.local domain name (or even external-service) instead of using the service’s actual FQDN. This hides the actual service name and its location from pods consuming the service, allowing you to modify the service definition and point it to a different service any time later, by only changing the externalName attribute or by changing the type back to ClusterIP and creating an Endpoints object for the service—either manually or by specifying a label selector on the service and having it created automatically.
ExternalName services are implemented solely at the DNS level—a simple CNAME DNS record is created for the service. Therefore, clients connecting to the service will connect to the external service directly, bypassing the service proxy completely. For this reason, these types of services don’t even get a cluster IP.
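For completeness, the ClusterIP-plus-Endpoints alternative mentioned in the quote would look roughly like this; the service name and port are placeholders, and the IP is the host address from the question:
apiVersion: v1
kind: Service
metadata:
  name: host-service        # placeholder name; no selector on purpose
spec:
  ports:
    - port: 80
---
apiVersion: v1
kind: Endpoints
metadata:
  name: host-service        # must match the Service name
subsets:
  - addresses:
      - ip: 192.168.200.4   # the host's IP, which you would have to update if it changes
    ports:
      - port: 80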
The ExternalName approach relies on your machine having a resolvable hostname. On minikube there's a DNS alias, host.minikube.internal, that is set up to resolve to an IP address that routes to your host machine; I don't know if k3s supports something similar.
Thanks @GeertPt,
With minikube's host.minikube.internal in mind I searched around and found that CoreDNS has a DNS entry for each host it's running on. This only seems to be the case for k3s, though.
Checking
kubectl -n kube-system get configmap coredns -o yaml
reveals the following entry:
NodeHosts: |
  192.168.200.4 my-hostname
So if the hostname doesn't change, I can use this instead of the IP.
Also, if you're running plain Docker, you can use host.docker.internal to access the host.
So to sum up:
from minikube: host.minikube.internal
from docker: host.docker.internal
from k3s: <hostname>
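To double-check which of these names actually resolve from inside your cluster, a throwaway pod is enough (busybox is just an example image; replace my-hostname with your node's hostname):
kubectl run dns-test --rm -it --image=busybox --restart=Never -- nslookup my-hostname
# On k3s this should return 192.168.200.4, courtesy of the CoreDNS NodeHosts entry shown above.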
I have a deployment with services running inside minikube on a remote Linux VPS. These services have a ClusterIP but no external IP, and I would like to access them from a web browser.
Either change the service type to LoadBalancer, which will create the external IP for you, and you can access the specific service using that.
OR
Set up an Ingress resource and an ingress controller, which can also be used to expose the service directly.
OR, for development use only (not for production):
kubectl port-forward svc/<service name> 5555:<service port here>
This creates a proxy tunnel between your K8s cluster and your local machine. You can now access your service at localhost:5555.
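One caveat for this setup: kubectl port-forward binds to 127.0.0.1 by default. Since minikube is on a remote VPS, you can either run the port-forward locally (with kubectl configured against the remote cluster) or run it on the VPS and bind to all interfaces, roughly like this (service name and port are placeholders, and this exposes the service to anyone who can reach the VPS):
kubectl port-forward --address 0.0.0.0 svc/<service name> 5555:<service port here>
# then browse to http://<vps-ip>:5555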
I have a RabbitMQ service running in an AKS (Azure Kubernetes Service) cluster as type LoadBalancer. While I am able to use the pod and service IP by browsing to http://<IP address>:<port number>/ to reach the RabbitMQ management page from VMs peered to the cluster's VNET, I am not able to access the page using the http://<servicename>.<namespace>.svc.cluster.local URL, with or without the ports appended. What could be done to make this work?
I discovered that the svc.cluster.local URLs resolve to the cluster IP and not the external IP of the LoadBalancer service. I figured this out after running nslookup <URL> from one of the pods in the namespace. I am now evaluating the possibility of setting a static IP for the external IP, or using the Azure Application Gateway.
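If you settle on the static-IP option, the Service can request a specific address via spec.loadBalancerIP (a sketch only: the name, selector, and IP below are placeholders, the address must already exist where AKS can use it, and newer Kubernetes versions deprecate this field in favour of cloud-provider annotations):
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq                  # placeholder
spec:
  type: LoadBalancer
  loadBalancerIP: 20.0.0.10       # placeholder: your reserved static public IP
  selector:
    app: rabbitmq                 # placeholder
  ports:
    - port: 15672                 # RabbitMQ management UI
      targetPort: 15672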
Various Google articles (example: this blog) state that this method (connecting to the cluster through a proxy and a ClusterIP) isn't suitable for a production environment, but is useful for development.
My question is: why is it not suitable for production? Why is connecting through a NodePort service better than a proxy and a ClusterIP?
Let's distinguish between three scenarios where connecting to the cluster is required.
Connecting to Kubernetes API Server
Connecting to the API server is required for administrative purposes. The users of your application have no business with it.
The following options are available:
Connect directly to the API server (master) IP via HTTPS
kubectl proxy: use kubectl proxy to make the Kubernetes API available on your localhost.
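A minimal example of the kubectl proxy option (the port number is arbitrary):
kubectl proxy --port=8001
# In another terminal, the API is now reachable on localhost using your kubeconfig credentials:
curl http://localhost:8001/api/v1/namespaces/default/pods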
Connecting external traffic to your applications running in the Kubernetes cluster. Here you want to expose your applications to your users. You'll need to configure a Service, and it can be of the following types:
NodePort: Only accessible on the node IPs, on ports in the 30000-32767 range by default.
ClusterIP: Internal only. External traffic cannot hit a service of type ClusterIP directly; it requires an Ingress resource and an ingress controller to receive external traffic.
LoadBalancer: Allows you to receive external traffic to one and only one service.
Ingress: This isn't a type of Service; it is another kind of Kubernetes resource. By configuring an NGINX Ingress, for example, you can handle traffic to multiple ClusterIP services with only one external LoadBalancer.
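To illustrate that last point, one Ingress can fan out to several ClusterIP services behind a single external load balancer. A sketch, with made-up host, service names, and ports:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  ingressClassName: nginx          # assumes an NGINX ingress controller is installed
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: api-svc      # ClusterIP service
                port:
                  number: 8080
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-svc      # ClusterIP service
                port:
                  number: 80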
A developer needs to troubleshoot a pod/service: kubectl port-forward (see the port-forwarding example). This requires kubectl to be configured on the system, hence it cannot be used by all users of the application.
As you can see from the above explanation, the proxy and port-forwarding options aren't viable for connecting external traffic to your running applications, because they require kubectl to be installed and configured with a valid kubeconfig that grants access to your cluster.
I created a Kubernetes service something like this on my 4-node cluster:
kubectl expose deployment distcc-deploy --name=distccsvc --port=8080 --target-port=3632 --type=LoadBalancer
The problem is: how do I expose this service on an external IP? Without an external IP you cannot ping or reach this service endpoint from outside the network.
I am not sure if I need to change kube-dns or make some other kind of change.
Ideally I would like the service to be exposed on the host IP,
like http://localhost:32876.
Hypothetically, let's say I have 4 node VMs on which I am running, say, an nginx service, and I expose it as a LoadBalancer service. How can I access nginx through this service from the VMs?
Let's say the service name is nginxsvc; is there a way I can do http://<service address>:8080? How will I get that address for my 4-node setup?
LoadBalancer does different things depending on where you deployed kubernetes. If you deployed on AWS (using kops or some other tool) it'll create an elastic load balancer to expose the service. If you deployed on GCP it'll do something similar - Google terminology escapes me at the moment. These are separate VMs in the cloud routing traffic to your service. If you're playing around in minikube LoadBalancer doesn't really do anything, it does a node port with the assumption that the user understands minikube isn't capable of providing a true load balancer.
LoadBalancer is supposed to expose your service via a brand-new IP address. So this is what happens on the cloud providers: they requisition VMs with a separate public IP address (GCP gives a static address and AWS a DNS name). NodePort will expose the service as a port on the Kubernetes node running the pod. This isn't a workable solution for a general deployment, but it works OK while developing.
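On a bare-metal 4-node cluster with no cloud provider, the EXTERNAL-IP of a LoadBalancer service will stay <pending>, but the node ports are still allocated, so something along these lines should work (the node port value is whatever kubectl reports for your service):
kubectl get svc distccsvc          # EXTERNAL-IP shows <pending>; note the node port in PORT(S), e.g. 8080:3XXXX/TCP
curl http://<any-node-ip>:<node-port>/
# Alternatively, install a bare-metal load balancer implementation such as MetalLB so the service gets a real external IP from a pool you define.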