Exposing a LoadBalancer service in minikube at an arbitrary port?

I have a minikube cluster running WordPress in one deployment and MySQL in another. Both deployments have corresponding services. The definition of the WordPress service looks like this:
apiVersion: v1
kind: Service
metadata:
  name: wordpress
spec:
  selector:
    app: wordpress
  ports:
  - port: 80
  type: LoadBalancer
The service works fine, and minikube service gives me a nice URL with the minikube ip address and a random high port. The problem is that WordPress needs the full site URL in its configuration. I'd rather not change it every single time, and instead have a stable local DNS name for the cluster.
Is there a way to expose the LoadBalancer on an arbitrary port in minikube? I'll be fine with any port, as long as the port is decided by me, not by minikube itself.

Keep in mind that Minikube cannot provision a real load balancer the way cloud providers do; it merely simulates one by using a simple NodePort Service instead.
You can still have full control over the port that is used. First of all, you can specify it manually in the NodePort Service spec (remember it should be within the default range, 30000-32767):
If you want a specific port number, you can specify a value in the nodePort field. The control plane will either allocate you that port or report that the API transaction failed. This means that you need to take care of possible port collisions yourself. You also have to use a valid port number, one that's inside the range configured for NodePort use.
Your example may look as follows:
apiVersion: v1
kind: Service
metadata:
  name: wordpress
spec:
  selector:
    app: wordpress
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30000
  type: NodePort
You can also change this default range by providing a custom value to the --service-node-port-range flag when starting your kube-apiserver.
When you use a kubernetes cluster set up by the kubeadm tool (Minikube also uses it as its default bootstrapper), you need to edit the /etc/kubernetes/manifests/kube-apiserver.yaml file and provide the required flag with your custom port range.
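For example, with minikube you can pass that flag through to the API server at start time; a minimal sketch, where the range itself is only an illustration:

minikube start --extra-config=apiserver.service-node-port-range=20000-32767

On a kubeadm-managed control plane, the equivalent is adding --service-node-port-range=20000-32767 to the kube-apiserver command list in /etc/kubernetes/manifests/kube-apiserver.yaml; since it is a static pod, the kubelet restarts it automatically when the manifest changes.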

Related

Why Do I Need a NodePort in My Local Kubernetes Cluster?

Excuse my relative networking ignorance, but I've read a lot of docs and still have trouble understanding this (perhaps due to a lack of background in networking).
Given this Dockerfile:
FROM node:lts-slim
RUN mkdir /code
COPY package.json /code/
WORKDIR /code
RUN npm install
COPY server.js /code/
EXPOSE 3000
CMD ["node", "server.js"]
...this deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web-pod
  template:
    metadata:
      labels:
        app: web-pod
    spec:
      containers:
      - name: web
        image: kahunacohen/hello-k8s
        ports:
        - containerPort: 3000
          protocol: TCP
and this service:
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  type: NodePort
  selector:
    app: web-pod
  ports:
  - port: 80
    targetPort: 3000
    protocol: TCP
    name: http
My understanding is that:
The app in my container is exposing itself to the outside world on 3000
my deployment yaml is saying, "the container is listening on 3000"
my service is saying: map port 3000 internally to port 80, which is the default HTTP port, so you don't have to add a port to the hostname.
I'm using the NodePort type because, on local clusters like Docker Desktop, it works out of the box, unlike LoadBalancer. It opens up a random port in the 30000–32767 range on every node (pod?) in the cluster to the outside. That node port is how I access my app from outside, e.g. localhost:30543.
Are my assumptions correct? I'm unclear on why I can't access my app at localhost:80, or just localhost, if the service makes the mapping between the container port and the outside world. What's the point of the mapping between 3000 and 80 in the service?
In short, why do I need NodePort?
There are two networking layers, which we could call "inside the cluster" and "outside the cluster". The Pod and the Service each have their own IP address, but these are only inside the cluster. You need the NodePort to forward a request from outside the cluster to inside the cluster.
In a "real" Kubernetes cluster, you'd make a request...
...to http://any-kubernetes-node.example.com:31245/, with a "normal" IP address of the kind you'd expect a physical system to have, connecting to the NodePort port, which forwards...
...to http://web-service.default.svc.cluster.local:80/, with a cluster-internal IP address and the service port, which looks at the pods it selects and forwards...
...to http://10.20.30.40:3000/, using the cluster-internal IP address of any of the matching pods and the target port from the service.
The containerPort: in the pod spec isn't strictly required (but if you give it name: http then you can have the service specify targetPort: http without knowing the specific port number). EXPOSE in the Dockerfile means pretty much nothing in this sequence.
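As a sketch of that naming trick (the name http is illustrative), the port number then lives in exactly one place:

# In the pod template:
ports:
- name: http
  containerPort: 3000
# In the Service:
ports:
- port: 80
  targetPort: http  # resolved by name to containerPort 3000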
This sequence also gives you some flexibility in not needing to know where things are running. Say you have 100 nodes and 3 replicas of your pod; the initial connection can be to any node, and the service will forward to any of the target pods, so the caller doesn't need to know any of these details.
(For completeness, a LoadBalancer type service requests that a load balancer be created outside the cluster; for example, an AWS ELB. This forwards to any of the cluster nodes as in step 1 above. If you're not in a cloud environment and the cluster doesn't know how to create the external load balancer automatically, it's the same as NodePort.)
If we reduce this to a local Kubernetes installation (Docker Desktop, minikube, kind) the only real difference is that there's only one node; the underlying infrastructure is still built as though it were a multi-node distributed cluster. How exactly you access a service differs across these installations. In Docker Desktop, from the host system, you can use localhost as the "normal" "external" node IP address in the first step.
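For instance, assuming the web-service example above and the illustrative NodePort 30543 from the question, access might look like:

# Docker Desktop: the host stands in for the single node
curl http://localhost:30543/
# minikube: ask for the node-IP-plus-NodePort URL instead
minikube service web-service --url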

Access to service API on Kubernetes

How do I get at service endpoints on a Kubernetes cluster when the services are of type ClusterIP?
Is there another way besides port forwarding?
I want to create some API tests, but the port forwarding closes after a few minutes and I have to restart it often, which is not good.
You can use a NodePort for a testing scenario, or else you can run kubectl proxy or kubectl port-forward svc/<service name> <port number>.
If your port forwarding keeps getting closed after a few minutes, you can increase the timeout by setting the kubelet's --streaming-connection-idle-timeout flag, e.g. --streaming-connection-idle-timeout=1h to set it to 1 hour.
However, port-forwarding is mainly for debugging short-term issues; for longer periods use a NodePort, which you can connect to directly.
Example YAML:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort
  selector:
    app: MyApp
  ports:
    # By default and for convenience, the `targetPort` is set to the same value as the `port` field.
    - port: 80
      targetPort: 80
      # Optional field
      # By default and for convenience, the Kubernetes control plane will allocate a port from a range (default: 30000-32767)
      nodePort: 30007
You can update the service name as needed.
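As a sketch, assuming the my-service example above, either path works for tests; the apiserver proxy route avoids the long-lived stream that port-forward keeps open:

# Direct NodePort access from outside the cluster
curl http://<node-ip>:30007/
# Or via kubectl proxy and the apiserver's service proxy path
kubectl proxy --port=8001 &
curl http://localhost:8001/api/v1/namespaces/default/services/my-service:80/proxy/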

EKS Load Balancer IP Not Found

I'm trying to use a load balancer to expose a service I have running on an EKS pod. My service is defined in a yaml like this:
kind: Service
apiVersion: v1
metadata:
  name: mlflow-server
  namespace: default
  labels:
    app: mlflow-server
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
spec:
  externalTrafficPolicy: Local
  type: LoadBalancer
  selector:
    app: mlflow-server
  ports:
  - name: http
    port: 88
    targetPort: http
  - name: https
    port: 443
    targetPort: https
This defines a service for the pod that I have the MLflow server running on. When I apply this and access the external IP generated for the service, I get a "This site can't be reached" error. Is there something I'm missing in exposing my service as a load-balanced service to access the MLflow UI?
For a basic LoadBalancer type service you do not need the annotation service.beta.kubernetes.io/aws-load-balancer-type: nlb; that annotation is what creates a Network Load Balancer (NLB). If you do need an NLB, there are two likely problems:
The NLB takes a few minutes to come up after you apply the manifest. If you check it just after deploying, it will not yet accept traffic. Check whether the intended network load balancer is up in your AWS EC2 console under the Load Balancers tab.
The second, more likely problem is that an NLB can be attached to only certain instance types. To check that, go through the following link:
https://docs.aws.amazon.com/elasticloadbalancing/latest/network/target-group-register-targets.html#register-deregister-targets
So if you do not actually need a network load balancer, remove the annotation, since an NLB also incurs a higher charge. But if it is a hard requirement, do check the second point: whether the instance types you are using on AWS are compatible with a Network Load Balancer, as sketched below.
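As a quick sanity check (a sketch; fill in your own names and target group ARN), confirm what was actually provisioned before debugging further:

# Did the Service ever receive an external hostname?
kubectl get svc mlflow-server -n default
# Are the NLB's registered targets healthy?
aws elbv2 describe-target-health --target-group-arn <your-target-group-arn>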

Assign ExternalIP of LoadBalancer to Deployment as ENV variable

I have a very specific case where my Pod needs to access another LoadBalancer service via its external IP.
Is there any way to assign the LoadBalancer's external IP as an ENV variable in a Deployment.yaml?
Thank you in advance!
I don't think this is directly possible in any of the standard templating tools. Part of the problem is that creating a cloud-hosted load balancer is an asynchronous operation, so that external-IP value won't be available until some time after kubectl apply (or the equivalent helm install) has finished.
If you can create the Service in advance then you can hard-code its external IP address or host name into other configuration, but this is intrinsically two steps. (If you're bought into Kubernetes operators, this should be possible with custom code: watch the Service, and once it gets its external address, create a corresponding ConfigMap that holds the address.)
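For example, a minimal sketch of reading the address once the cloud load balancer exists (the Service name my-lb-service is illustrative; providers populate either the ip or the hostname field):

kubectl get svc my-lb-service \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}{.status.loadBalancer.ingress[0].hostname}'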
Depending on your specific use case it may also work to just target the LoadBalancer Service within your cluster, the same as any other Service. This won't go out through the cloud provider's load-balancer tier, but it should be indistinguishable otherwise.
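In that case the Deployment can carry the stable in-cluster DNS name instead of the external IP, e.g. (a sketch with illustrative names):

env:
- name: TARGET_SERVICE_URL  # hypothetical variable consumed by the app
  value: "http://my-lb-service.default.svc.cluster.local"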
I found a way to do it, but @David Maze was perfectly right: there is no straightforward way.
So, my solution was to add DNS records with public and private zones:
apiVersion: v1
kind: Service
metadata:
  name: nginx-lb
  labels:
    app.kubernetes.io/name: nginx-lb
  annotations:
    external-dns.alpha.kubernetes.io/hostname: mycoolservice.{{ .Values.dns_external_zone }}.
    external-dns.alpha.kubernetes.io/zone-type: public,private
    external-dns.alpha.kubernetes.io/ttl: "1"
spec:
  type: LoadBalancer
  ports:
  - name: https
    port: 443
    targetPort: https
  - name: http
    port: 80
    targetPort: http
  selector:
    app.kubernetes.io/name: nginx

Kubernetes service with external name curl

Well, I created a kubernetes-service.yaml file. Now I suppose that on port 8081 my backend service will be exposed under the domain my.backend.com. I would like to check whether it's accessible, but I have it available only within the cluster. How do I do that? I don't want to expose the service externally; I just want to run curl my.backend.com inside the cluster to check the results. Is there any workaround for that?
apiVersion: v1
kind: Service
metadata:
  name: backend-service
  labels:
    app: backend
spec:
  type: ExternalName
  selector:
    app: backend
  ports:
  - protocol: TCP
    port: 8081
    targetPort: 8080
  externalName: my.backend.com
The service itself is only exposed within the cluster; however, the FQDN my.backend.com is not handled or controlled by the cluster. It is likely a publicly accessible URL, so you can curl it from anywhere. You'll have to configure your domain in a way that restricts who can access it.
The ExternalName service type is external to the cluster and really only provides a CNAME redirect from within your cluster to an external name. I'm not sure what you are trying to do, but it's not a change you make at the cluster level.
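As a sketch, one way to run that curl from inside the cluster is a throwaway pod (the image choice is illustrative; note that ExternalName does no proxying, so you connect to whatever port my.backend.com itself listens on):

kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl -- \
  curl -v http://backend-service.default.svc.cluster.local/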