Not able to access deployment from external network (Kubernetes)

I am new to Kubernetes and have a limited understanding of Kubernetes networking.
Problem context:
Minikube installed on a VM (RHEL 8) [minikube start --driver=docker]
Under the minikube node I have a deployment with a Service (type: NodePort) and port mapping 8008:30000
(refer to screenshot)
So now the confusion: I am able to access the application (in the deployment) using curl <node-ip>:<node-port> from inside the VM (x.y.z.w), but when I try to access it from my external network, using x.y.z.w:30000 as the URL, I cannot. What am I missing here?
Or do I need to use an Ingress (with NGINX) here?
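For reference, a NodePort Service with that 8008:30000 mapping would look roughly like the sketch below; since the screenshot is not included, the service name, selector labels, and targetPort here are assumptions:

apiVersion: v1
kind: Service
metadata:
  name: my-app          # assumed name
spec:
  type: NodePort
  selector:
    app: my-app         # assumed labels matching the deployment's pods
  ports:
  - port: 8008          # the service port
    targetPort: 8008    # assumed container port
    nodePort: 30000     # the port exposed on the minikube node

One detail worth noting: with the Docker driver, the minikube "node" is itself a Docker container inside the VM, so <node-ip> is only routable from within the VM; the node port is not automatically bound to the VM's external interface.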

Related

Access Kubernetes applications via localhost from the host system

Is there any way, other than port-forwarding, to access the apps running inside my K8s cluster via http://localhost:port from my host operating system?
For example:
I am running a minikube setup to practise K8s, and I deployed three pods along with their services, choosing three different service types: ClusterIP, NodePort, and LoadBalancer.
For ClusterIP, I can use the port-forward option to access my app via localhost:port, but the problem is that I have to leave that command running, and if for some reason it is interrupted, the connection drops. Is there an alternate solution here (see the sketch after this question)?
For NodePort, I can only access the app via the minikube node IP, not via localhost; therefore, if I have to access it remotely, I won't have a route to that node IP address.
For LoadBalancer, this is not a valid option, as I am running minikube on my local system, not in the cloud.
Please let me know if there is any other solution to this problem. The reason I ask is that when I deploy the same application via Docker Compose, I can access all of these services via localhost:port, and I can even reach them via VM_IP:port from other systems.
Thanks,
-Rafi
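One workaround sketch for the dropped port-forward problem, assuming a hypothetical service my-service on port 8080: wrap kubectl port-forward in a restart loop and bind it to all interfaces so other systems can reach it via VM_IP:port, roughly mirroring the Docker Compose behaviour described above:

#!/bin/sh
# restart the forward whenever it exits; --address 0.0.0.0 exposes it beyond localhost
while true; do
  kubectl port-forward --address 0.0.0.0 service/my-service 8080:8080
  sleep 1
done

Running this under a process supervisor (systemd, for example) instead of a bare loop would make it survive reboots as well.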

Kind cluster: how to access a service using LoadBalancer

I am deploying a K8s cluster locally using Kind. The image gets deployed OK, and when I view the list of services I see the following:
The service I'm trying to access is chatt-service, and as you can see, its EXTERNAL-IP is pending. I know minikube has a command which makes this accessible, but how do I do it on a Kind cluster?
For the LoadBalancer service type you will not be able to get a public IP, because you're running locally; you would need to run in a cloud provider that provisions the LB for you, such as an ALB on AWS or a Load Balancer on DigitalOcean. However, you can access this service locally using kubectl port-forwarding:
kubectl port-forward service/chatt-service 3002:3002
There are some additional options for making LoadBalancer work under a Kind cluster (while port forwarding is the simplest way):
https://kind.sigs.k8s.io/docs/user/loadbalancer/
First way: you can also expose pods and services using extra port mappings, which means manually setting ports in cluster-config.yaml, as sketched below.
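A minimal cluster-config.yaml sketch for that first way; the port numbers are assumptions, and the service inside the cluster would still need to listen on the mapped containerPort (for example via a NodePort):

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 30950   # assumed NodePort of the service inside the cluster
    hostPort: 3002         # port exposed on the local machine
    protocol: TCP

The mapping only takes effect at cluster creation time, i.e. kind create cluster --config cluster-config.yaml.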
And maybe a second way (though not actually a solution at the LoadBalancer level): you may want to check out the Ingress Guide as a cross-platform workaround.

How can k8s be aware of service running on host?

I am running Cockpit on my host as a systemd service; at the same time I am using MicroK8s (single-node K8s).
I don't need a reverse proxy to the Cockpit web UI, but I need something like an external Service/Endpoints K8s object mapping, so that K8s is aware of that service:
K8s should not allow creating another NodePort service on the same port (at the K8s layer, so it does not collide at the OS layer).
K8s should handle the iptables rules for that service.
Btw: Cockpit must keep running as a standalone application directly on the host.
Any ideas?
Thanks
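One pattern that matches the "external Service/Endpoints mapping" idea is a Service without a selector plus a manually managed Endpoints object. A hedged sketch, assuming Cockpit on its usual port 9090 and a host IP of 192.168.0.10 (both placeholders):

apiVersion: v1
kind: Service
metadata:
  name: cockpit
spec:
  ports:
  - port: 9090         # Cockpit's default web UI port
    targetPort: 9090
---
apiVersion: v1
kind: Endpoints
metadata:
  name: cockpit        # must match the Service name
subsets:
- addresses:
  - ip: 192.168.0.10   # the host's address; loopback addresses are not allowed here
  ports:
  - port: 9090

This makes the host service addressable inside the cluster (cockpit.<namespace>.svc), but note it does not by itself reserve port 9090 against NodePort allocation; by default NodePorts are drawn from the 30000-32767 range anyway, so they would not collide with it.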

Enable access to Kubernetes Dashboard without kubectl proxy

If I move the relevant config file into place and run kubectl proxy, it will allow me to access the Kubernetes dashboard through this URL:
http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
However, if I try to access the node directly, without kubectl proxy, I get a 403 Forbidden:
http://dev-master:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login
Our Kubernetes clusters are hidden inside a private network that users need to VPN into; furthermore, only some of us can talk to the master node of each of our clusters after authenticating to the VPN. As such, running kubectl proxy is a redundant step, and choosing the appropriate config file for each cluster is an additional pain, especially when we want to compare the state of different clusters.
What needs to be changed to allow "anonymous" HTTP access to the dashboard on these already-secured Kubernetes master nodes?
You would want to set up a Service (either NodePort or LoadBalancer) for the dashboard pod(s) to expose it to the outside world (well, outside from the PoV of the cluster, which is still an internal network for you).
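A minimal sketch of that approach: switch the dashboard's Service to NodePort (the service name and namespace are taken from the URL above) and read back the allocated port:

kubectl -n kube-system patch service kubernetes-dashboard -p '{"spec": {"type": "NodePort"}}'
kubectl -n kube-system get service kubernetes-dashboard   # note the allocated node port

Bear in mind the dashboard still performs its own authentication, so this exposes the login page rather than granting truly anonymous access.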

How do I find out the external IP of a Load Balancer service?

I am using Kubernetes Engine on the Google Cloud Platform. I have a pod running a process in a Docker scratch container. I also have a load balancer service that gives me access to the pod from the outside world.
The process running in the pod needs to know what its external IP address is. How can I get this?
Prior to using Kubernetes Engine I was using Compute Engine and could find the external IP address by the following:
curl -H "Metadata-Flavor: Google" http://metadata/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip
Are there any internal tools I can use that would be available to my process? Or would I need the process to call an external site that can mirror back the IP address?
Every Pod (unless configured not to) has valid Kubernetes credentials in /var/run/secrets/kubernetes.io/serviceaccount/token, as described here, so the answer is to use the Kubernetes API to ask the Service in front of the Pod(s) for its status.loadBalancer.ingress.ip, as described here, which I have every reason to believe GKE will keep up to date with any changes to the load balancer. The Kubernetes API is always(?) located at https://kubernetes (that's normally enough; https://kubernetes.default.svc.cluster.local is its full name), so there should be very little configuration the Pod would need in order to carry out the lookup.
The asterisk to that response is that one must provide the name of the Service to the Pod(s) it sits in front of, because (for the most part) there is no way for a Pod to know how many Services point to it. A sketch follows.
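Putting that together, a sketch of the in-Pod lookup; my-service is a placeholder for the Service's actual name, and curl and jq are assumed to be available in the container (not a given in a scratch image):

# standard service account mounts inside the Pod
SA=/var/run/secrets/kubernetes.io/serviceaccount
TOKEN=$(cat $SA/token)
NAMESPACE=$(cat $SA/namespace)

# ask the API server for the Service and extract the load balancer IP
curl -s --cacert $SA/ca.crt \
  -H "Authorization: Bearer $TOKEN" \
  "https://kubernetes.default.svc/api/v1/namespaces/$NAMESPACE/services/my-service" \
  | jq -r '.status.loadBalancer.ingress[0].ip'

The Pod's service account also needs RBAC permission to get Services in its namespace for this call to succeed.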