Minikube service expose to public IP - kubernetes

I am learning Kubernetes and trying to deploy an app using MiniKube.
I have managed to expose the service mapped to the nginx pod on the Minikube IP, so I can access the nginx service at $(minikube ip):$(serviceport), which is fine. However, I am looking to expose this to the public network. Currently this service is only accessible via my local machine; any other machine on my wifi network is not able to access it, as it is exposed only on the minikube IP. I don't want to forward the port on my local Linux box via iptables, and I am looking for a built-in solution to expose the port to the world (and not just on the minikube IP). I know it can be achieved, as the minikube dashboard by default exposes its service on localhost; this implies that minikube can talk to other network adapters and register the port there, I am just not sure how.
Here is my service yaml:
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
  name: nginxservice
  labels:
    app: nginxservice
spec:
  type: NodePort
  ports:
  - port: 80
    name: http
    targetPort: 80
    nodePort: 32756
  selector:
    app: nginxcontainer

#subudear is right - you need Ingress.
An API object that manages external access to the services in a
cluster, typically HTTP. Ingress may provide load balancing, SSL
termination and name-based virtual hosting.
Ingress exposes HTTP and
HTTPS routes from outside the cluster to services within the cluster.
Traffic routing is controlled by rules defined on the Ingress
resource.
To be able to use Ingress regularly (I'm not talking about Minikube right now), it is not enough to simply create an Ingress object. You should first install the related ingress controller.
There are a lot of them; the most popular are:
NGINX Ingress Controller
Kubernetes Nginx Ingress Controller
Traefik
Istio Ingress Controller
The first two are very similar but use completely different annotations; it often happens that people confuse them.
Talking about minikube:
As per the guidelines, in order to install ingress the only thing you have to do is
minikube addons enable ingress
Please note that by default, minikube installs exactly the NGINX Ingress Controller:
nginx-ingress-controller-5984b97644-rnkrg 1/1 Running 0 1m

You then have to create the Ingress itself.
Follow the steps in this doc - https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/
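For illustration only, a minimal Ingress for the nginxservice from the question could look roughly like this (the hostname is a placeholder you would point at the minikube IP via /etc/hosts or DNS; the API version matches the other examples on this page):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: nginx.example.com            # placeholder host, mapped to $(minikube ip)
    http:
      paths:
      - path: /
        backend:
          serviceName: nginxservice    # the Service defined in the question
          servicePort: 80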

Related

No ExternalIP showing in kubernetes nodes?

I am running
kubectl get nodes -o yaml | grep ExternalIP -C 1
but I am not finding any ExternalIP. There are various comments showing up about problems with non-cloud setups.
I am following this doc https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/
with microk8s on a desktop.
If you set up your k8s cluster on a cloud, Kubernetes will auto-detect the ExternalIP for you; it will be a load balancer IP address. But if you set it up on premises or on your desktop, you can set an external IP address by deploying your own load balancer, such as MetalLB.
You can get it here
In short:
From my answer Kubernetes Ingress nginx on Minikube fails.
By default, all solutions like minikube do not provide you a LoadBalancer. Cloud solutions like EKS, Google Cloud and Azure do it for you automatically by spinning up a separate LB in the background. That's why you see the Pending status.
In your case the right decision is most probably to look into the MicroK8s add-ons. There is an add-on for MetalLB:
Thanks to #Matt with his MetalLB external load balancer on docker-desktop community edition on Windows 10 single-node Kubernetes Infrastructure answer and the researched info.
MetalLB Loadbalancer is a network LB implementation that tries to
“just work” on bare metal clusters.
When you enable this add on you will be asked for an IP address pool
that MetalLB will hand out IPs from:
microk8s enable metallb
For load balancing in a MicroK8s cluster, MetalLB can make use of
Ingress to properly balance across the cluster ( make sure you have
also enabled ingress in MicroK8s first, with microk8s enable ingress).
To do this, it requires a service. A suitable ingress service is
defined here:
apiVersion: v1
kind: Service
metadata:
  name: ingress
  namespace: ingress
spec:
  selector:
    name: nginx-ingress-microk8s
  type: LoadBalancer
  # loadBalancerIP is optional. MetalLB will automatically allocate an IP
  # from its pool if not specified. You can also specify one manually.
  # loadBalancerIP: x.y.z.a
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80
  - name: https
    protocol: TCP
    port: 443
    targetPort: 443
You can save this file as ingress-service.yaml and then apply it with:
microk8s kubectl apply -f ingress-service.yaml
Now there is a load-balancer which listens on an arbitrary IP and
directs traffic towards one of the listening ingress controllers.
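If it helps, the address pool can also be supplied non-interactively, and you can then check which external IP MetalLB handed to the ingress service; the range below is only an example from a typical home LAN:
# enable MetalLB with an example address range on the local network
microk8s enable metallb:192.168.1.240-192.168.1.250
# the ingress service defined above should now show an EXTERNAL-IP from that pool
microk8s kubectl get svc -n ingress ingress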

Kubernetes - Expose Website using nginx-ingress

I have a website running inside a kubernetes cluster.
I can access it locally, but I want to make it available over the internet (I have a registered domain), but the external IP keeps pending.
I worked with this instruction: https://dev.to/peterj/expose-a-kubernetes-service-on-your-own-custom-domain-52dd
This is the code for the service and ingress
kind: Service
apiVersion: v1
metadata:
  name: app-service
spec:
  selector:
    app: website
  ports:
  - name: http
    protocol: TCP
    port: 3000
    targetPort: 3000
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: www.carina.bernrieder.de
    http:
      paths:
      - path: /
        backend:
          serviceName: app-service
          servicePort: 3000
So I'm using Helm to install the nginx controller, but after that, in kubectl get all, the external IP of the nginx controller keeps pending.
EXTERNAL-IP is expected to be pending in a non-cloud environment such as minikube. You should be able to access the application using curl www.carina.bernrieder.de
Here is a guide on using nginx ingress to expose an application on minikube
As #Arghya Sadhu mentioned, in a local environment it is the expected behaviour. Maybe it will be easier to understand when you look a bit more deeply at how it works in cloud environments. Without going into details, if you apply an Ingress resource on GKE, EKS or AKS, a few more things happen "under the hood". A load balancer with an external IP is automatically created so your ingress can use it to forward external traffic to Pods deployed on your kubernetes cluster.
Minikube doesn't have such capabilities, as it cannot make any call to any API for additional infrastructure resources to be created, as happens in cloud environments.
But let's start from the beginning. You didn't mention in your question anything about your external IP or domain configuration. If you don't have an external static IP to which your domain has been redirected, it has no chance of working anyway.
As to this point, I won't fully agree:
You should be able to access the application using curl
www.carina.bernrieder.de
Yes, you will be able to access it via your domain (actually via any domain that you don't even need to own) provided you add the following entry in your /etc/hosts file so DNS won't be used and it will be resolved based on this locally defined mapping:
172.17.0.15 www.carina.bernrieder.de
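As a quick sanity check (172.17.0.15 here stands in for whatever minikube ip returns on your machine), curl can apply the same mapping for a single request without touching /etc/hosts:
curl --resolve www.carina.bernrieder.de:80:172.17.0.15 http://www.carina.bernrieder.de/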
As you can read here:
Note: If you are running Minikube locally, use minikube ip to get the
external IP. The IP address displayed within the ingress list will be
the internal IP.
But keep in mind that both of those IPs will be private IPs. The one displayed within the ingress list will be an internal cluster IP, and the other one will be external only from your Minikube cluster's perspective; it will still be an IP in your local network, assigned to your Minikube VM.
And as you said in your question, you want to make it available over the Internet. As you can see, it has no chance of working without additional configuration.
Another important thing: you didn't mention where your Minikube is actually installed, so I guess you set it up on your local computer and most probably you're behind a NAT router. If this is your case, it won't be so easy to expose it on the public internet. You will need to configure proper port forwarding rules on your router, and of course you need a static IP, or you need to configure dynamic DNS to be able to access your computer on the Internet via your dynamic public IP.
Minikube was designed mainly for playing locally with kubernetes and not for production environments. Of course you can use it to run your small app, but then you may think about installing it on a VM in a cloud environment or some sort of VPS server.

Is it possible to serve up applications through a Kubernetes controller node?

I have built a K3s (https://k3s.io) cluster on a set of Raspberry Pi4 computers.
The controller (ctrl-1) node is a gateway in that it has 2 network interfaces. One is connected to my LAN and the other is connected to a network that it creates, e.g. K3S-LAN. The two nodes (node-1 and node-2) are deployed to the K3S-LAN.
I want to be able to access the applications running on the nodes through ctrl-1, e.g. from the LAN. This is because this cluster is meant to be portable so only the ctrl-1 node needs to be connected to the guest LAN. (Yes there are issues with DNS names etc to be sorted out, but I want to get the basics running first).
This means that I need to be able to "proxy" the ingress through ctrl-1. I thought I had the right idea for this in that I deployed "nginx-ingress" to the master, using Helm. However I forgot about the service for this - this has been scheduled on the nodes, whereas it needs to be on the controller so that the ports are opened up (I think). However I cannot find how to make the service run on the controller.
At the moment I have the service running with a type of NodePort. I could install MetalLB so that I have LoadBalancer capabilities. However with what I have seen I am not sure if this would help or not.
ctrl-1 does not have any taints setup on it, just the role of master.
Am I barking up the wrong tree here? I guess this might not be the intended use case of Kubernetes, but I am playing around with an idea. Thanks for any ideas that people have.
Update:
I have just thought that the way around this might be to run HAProxy on ctrl-1 (as another service on the host) and setup rules to proxy to the necessary services within the cluster. That would act as the bridge between the networks.
You just need to expose your pod via a NodePort-type service and it can be accessed via http://master-node-ip:nodeport. Make sure that kube-proxy is running on all master and worker nodes.
The ingress approach should also work as long as you have kube-proxy running on your master. You deploy nginx ingress on your cluster and it will get deployed onto a worker node. Then you can expose the nginx ingress controller itself using a NodePort service. After this you can create an ingress resource to configure the nginx ingress controller to route traffic to your backend pods and services running on worker nodes. The services for backend pods should be of type ClusterIP.
Deploy nginx ingress controller and expose it via NodePort service using kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.27.1/deploy/static/provider/baremetal/service-nodeport.yaml
Deploy an nginx pod (nginx is an example; this should be your pod): kubectl run nginx --generator=run-pod/v1 --image=nginx
Expose nginx pod via ClusterIP service
apiVersion: v1
kind: Service
metadata:
  labels:
    run: nginx
  name: nginx-service
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: nginx
Create ingress resource
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: mycha-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: nginx-service
          servicePort: 80
With the above setup I can now access nginx and get "Welcome to nginx!" via http://master-node-ip:NodePort of the nginx ingress controller.
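For reference, this is roughly how you would look up that NodePort and test it, assuming the default names created by the baremetal manifest above (namespace ingress-nginx, service ingress-nginx):
# find the NodePort assigned to the ingress controller's port 80
kubectl get svc -n ingress-nginx ingress-nginx
# then request the application through any node, e.g. the master
curl http://master-node-ip:NodePort/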

How to expose kubernetes service on prem using 443/80

Is it possible to expose Kubernetes service using port 443/80 on-premise?
I know some ways to expose services in Kubernetes:
1. NodePort - Default port range is 30000 - 32767, so we cannot access the service using 443/80. Changing the port range is risky because of port conflicts, so it is not a good idea.
2. Host network - Force the pod to use the host's network instead of a dedicated network namespace. Not a good idea because we lose kube-dns etc.
3. Ingress - AFAIK it uses NodePort (so we face the first problem again) or a cloud provider LoadBalancer. Since we use Kubernetes on premises we cannot use this option.
MetalLB, which allows you to create Kubernetes services of type LoadBalancer in clusters that don't run on a cloud provider, is not yet stable enough.
Do you know any other way to expose service in Kubernetes using port 443/80 on-premise?
I'm looking for a "Kubernetes solution"; Not using external cluster reverse proxy.
Thanks.
IMHO ingress is the best way to do this on prem.
We run the nginx-ingress-controller as a daemonset with each controller bound to ports 80 and 443 on the host network. Nearly 100% of traffic to our clusters comes in on 80 or 443 and is routed to the right service by ingress rules.
Per app, you just need a DNS record mapping your hostname to your cluster's nodes, and a corresponding ingress (sketched right after the manifest below).
Here's an example of the daemonset manifest:
kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: nginx-ingress-controller
spec:
  selector:
    matchLabels:
      component: ingress-controller
  template:
    metadata:
      labels:
        component: ingress-controller
    spec:
      restartPolicy: Always
      hostNetwork: true
      containers:
      - name: nginx-ingress-lb
        image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.21.0
        ports:
        - name: http
          hostPort: 80
          containerPort: 80
          protocol: TCP
        - name: https
          hostPort: 443
          containerPort: 443
          protocol: TCP
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        args:
        - /nginx-ingress-controller
        - '--default-backend-service=$(POD_NAMESPACE)/default-http-backend'
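The per-app ingress mentioned above is then just an ordinary Ingress resource; a sketch with placeholder names (myapp.example.com, myapp-service) might look like this:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myapp-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: myapp.example.com           # DNS record pointing at the cluster nodes
    http:
      paths:
      - path: /
        backend:
          serviceName: myapp-service  # ClusterIP service of the application
          servicePort: 80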
Use an ingress controller as an entry point to the services in the Kubernetes cluster. Run the ingress controller on port 80 or 443.
You need to define ingress rules for each backend service that you want to access from outside. The ingress controller should then allow clients to access the services based on the paths defined in the ingress rules.
If you need to allow access over HTTPS, then you need to get the DNS certificates, load them into secrets, and bind them in the ingress rules.
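For example (hostname, file names and secret name are placeholders), the certificate is first loaded into a TLS secret:
kubectl create secret tls myapp-tls --cert=tls.crt --key=tls.key
and then referenced from the Ingress in a tls section:
spec:
  tls:
  - hosts:
    - myapp.example.com
    secretName: myapp-tls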
The most popular one is the nginx ingress controller; Traefik and HAProxy ingress controllers are alternative solutions.
The idea of a hostNetwork proxy is actually not bad; the OpenShift Router uses that, for example. You define two or three nodes to run the proxy and use DNS load balancing in front of them.
And you can still use kube-dns with hostNetwork, see https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-policy
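The relevant setting from that page is dnsPolicy; a hostNetwork pod keeps using cluster DNS when its spec contains (minimal excerpt):
spec:
  hostNetwork: true
  dnsPolicy: ClusterFirstWithHostNet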
You are probably running a kubeadm on-premises Kubernetes setup with an nginx ingress controller on unix/linux hosts and can't safely expose ports in the restricted system port range (0-1023).
You either need to set up your own dedicated load balancer pair (e.g. Linux boxes running HAProxy) or alternatively use existing load balancers if you are lucky enough to be in a corporate environment that already provides load balancing (e.g. an F5 LB).
Then you will be able to set the load balancers to forward your 443/80 requests to your cluster nodes' 30443/30080 ports, which are handled by your cluster's ingress controller.
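To keep those node ports predictable for the external load balancer, the ingress controller's NodePort service can pin them explicitly; a sketch (names and labels will differ per installation) could look like this:
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: ingress-nginx   # must match your controller pods
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 30080
  - name: https
    port: 443
    targetPort: 443
    nodePort: 30443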

Provide Users access to applications installed in their namespaces

I need to create a k8s cluster where each user has their own namespace with applications installed in it, which they can access from a web portal (e.g. providing http://service_ip:service_port in the case of JupyterHub). I am using Helm charts to install the applications and am kind of confused by the service types, so I need your suggestion: should I use NodePort or ClusterIP, and how would I discover and provide the service URL to users? Any help would be appreciated.
Steps
Find the Service defined for the application.
Expose the Service via NodePort, LoadBalancer, or Ingress.
Reference
Kubernetes in Action Chapter 5. Services: enabling clients to discover and talk to pods
NodePort
If the client can access the nodes directly or via a tunnel (VPN or SSH tunnel), then expose the service as the NodePort type.
To do so, use kubectl expose or kubectl edit to change the spec.type.
Example:
apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  clusterIP: 10.100.96.203
  ports:
  - port: 443
    protocol: TCP
    targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
  sessionAffinity: None
  type: ClusterIP   # <----- Change to NodePort (or LoadBalancer)
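If you prefer the command line over editing the manifest, the same type change can be applied with a one-line patch (using the dashboard service above as the example):
kubectl -n kube-system patch svc kubernetes-dashboard -p '{"spec": {"type": "NodePort"}}'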
LoadBalancer
If K8S is running in AWS, Azure or GCE, for which K8S cloud providers are supported, then the service can be exposed via the load balancer's DNS name or IP (it can be reachable via the public Internet too, depending on the access configuration on the LB). Change the service's spec.type to LoadBalancer.
For AWS cloud provider, refer to K8S AWS Cloud Provider Notes.
Ingress
K8S ingress offers a way to access via hostname and TLS. Similar to OpenShift Route.