Kubernetes: LoadBalancer and Ingress

In the following manifest, which IP address will be exposed outside the Kubernetes cluster: 78.11.24.19 or 146.148.47.155?
I am trying to understand the load balancer and ingress fields here.
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: MyApp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
  clusterIP: 10.0.171.239
  loadBalancerIP: 78.11.24.19
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
      - ip: 146.148.47.155

This is nicely explained in the Finding your IP address section of Create an External Load Balancer.
You can find the IP address created for your service by getting the service information through kubectl:
kubectl describe services example-service
which should produce output like this:
Name: example-service
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=example
Type: LoadBalancer
IP: 10.67.252.103
LoadBalancer Ingress: 192.0.2.89
Port: <unnamed> 80/TCP
NodePort: <unnamed> 32445/TCP
Endpoints: 10.64.0.4:80,10.64.1.5:80,10.64.2.4:80
Session Affinity: None
Events: <none>
The IP address is listed next to LoadBalancer Ingress. In your example, that is 146.148.47.155: the address reported under status.loadBalancer.ingress is what is exposed outside the cluster, while loadBalancerIP: 78.11.24.19 in the spec is only a request for a particular address, which the cloud provider may or may not honor.
Update:
It's explained at Object Spec and Status:
Every Kubernetes object includes two nested object fields that govern the object's configuration: the object spec and the object status. The spec, which you must provide, describes your desired state for the object: the characteristics that you want the object to have. The status describes the actual state of the object, and is supplied and updated by the Kubernetes system. At any given time, the Kubernetes Control Plane actively manages an object's actual state to match the desired state you supplied.
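As a quick check, you can pull the externally exposed address straight out of the status field with a JSONPath query. A minimal sketch, assuming the Service above is named my-service in the current namespace:
kubectl get service my-service \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
# prints 146.148.47.155 once the cloud provider has provisioned the load balancer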

Related

Manually created EndpointSlice resource not associated with Service

I'm trying to create a Service on cluster A that points to the IP address of cluster B. I do not have a domain name for cluster B, so I can't use ExternalName. The way I'm trying to do this is by creating a Service without a selector on cluster A and manually creating an EndpointSlice resource for that Service which will point to cluster B. According to the Kubernetes documentation, I need to "link an EndpointSlice to a Service by setting the kubernetes.io/service-name label on that EndpointSlice." But even after doing so, my Service apparently has no endpoints.
Code
endpointslice.yaml
apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  name: hack-svc-1
  labels:
    kubernetes.io/service-name: hack-svc
    kubernetes.io/managed-by: manual
addressType: IPv4
ports:
  - port: 80
endpoints:
  - addresses:
      - "cluster B's IPv4 address here"
    conditions:
      ready: true
service.yaml
apiVersion: v1
kind: Service
metadata:
  name: hack-svc
spec:
  ports:
    - port: 80
After kubectl describe service hack-svc:
Name: hack-svc
Namespace: default
Labels: <none>
Annotations: <none>
Selector: <none>
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: <IPv4 address here>
IPs: <IPv4 address here>
Port: http 80/TCP
TargetPort: 80/TCP
Endpoints: <none> <-- No endpoints??
Session Affinity: None
Events: <none>
How can I associate the EndpointSlice with my Service?
The EndpointSlice API is a scalable and extensible alternative to the Endpoints API. EndpointSlices gather information such as IP addresses, ports, readiness, and topology from the pods of a service. Follow this tutorial and verify whether there are any mismatches in how the EndpointSlices are configured for your clusters; it helped in my case.
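For reference, one mismatch that commonly causes an empty Endpoints field here: kube-proxy matches Service ports to EndpointSlice ports by name. Because the Service port above is unnamed, the EndpointSlice port needs an explicitly empty name to match it. A minimal sketch of the paired objects, modeled on the "Services without selectors" example in the Kubernetes documentation (192.0.2.42 is a placeholder for cluster B's address):
apiVersion: v1
kind: Service
metadata:
  name: hack-svc
spec:
  ports:
    - port: 80
      targetPort: 80
---
apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  name: hack-svc-1
  labels:
    kubernetes.io/service-name: hack-svc # must match the Service name exactly
addressType: IPv4
ports:
  - name: "" # empty, because the Service port above has no name
    protocol: TCP
    port: 80
endpoints:
  - addresses:
      - "192.0.2.42" # placeholder: cluster B's IPv4 address
    conditions:
      ready: true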

Unable to forward traffic using NodePort

I have an application running inside a minikube Kubernetes cluster. It's a simple REST endpoint. The issue is that after deployment I am unable to access the application from my local computer using the http://{node ip}:{node port} endpoint.
However, if I do:
kubectl port-forward (actual pod name) 8000:8000
The application becomes accessible at: 127.0.0.1:8000 from my local desktop.
Is this the right way?
I believe it isn't, as I am directly forwarding my traffic to the pod, and this port forwarding won't remain once the pod is deleted.
What am I missing here, and what is the right way to resolve this?
I have also configured a NodePort service, which should handle this, but I am afraid it doesn't seem to be working:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: rest-api
  name: rest-api-np
  namespace: rest-api-namespace
spec:
  type: NodePort
  ports:
    - port: 8000
  selector:
    app: rest-api
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: rest-api
  name: rest-api-deployment
  namespace: rest-api-namespace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rest-api
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: rest-api
    spec:
      containers:
        - image: oneImage:latest
          name: rest-api
You are having issues because your Service is placed in the default namespace while your Deployment is in the rest-api-namespace namespace.
I deployed your YAML files, and when I describe the Service there are no endpoints:
➜ k describe svc rest-api-np
Name: rest-api-np
Namespace: default
Labels: app=rest-api
Annotations: <none>
Selector: app=rest-api
Type: NodePort
IP: 10.100.111.228
Port: <unset> 8000/TCP
TargetPort: 8000/TCP
NodePort: <unset> 31668/TCP
Endpoints: <none>
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
The solution is to create the Service in the same namespace as the Deployment. Once you do that, an IP address and port will appear in the Endpoints field:
➜ k describe svc -n rest-api-namespace rest-api-np
Name: rest-api-np
Namespace: rest-api-namespace
Labels: app=rest-api
Annotations: <none>
Selector: app=rest-api
Type: NodePort
IP: 10.99.49.24
Port: <unset> 8000/TCP
TargetPort: 8000/TCP
NodePort: <unset> 32116/TCP
Endpoints: 172.18.0.3:8000
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
An alternative way is to add the endpoints manually:
apiVersion: v1
kind: Endpoints
metadata:
  name: my-service # note: the Endpoints object and the Service need to have the same name
subsets:
  - addresses:
      - ip: 192.0.2.42 # the IP of the pod
    ports:
      - port: 8000
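Either way, you can check that the Service picked up the endpoints; a quick sketch, assuming the namespaced service from above:
kubectl -n rest-api-namespace get endpoints rest-api-np
# should list the pod IP and port, e.g. 172.18.0.3:8000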
Since port-forwarding works, the REST API service is properly connected to your deployment. In that case the service can be reached in the following way.
First, find out the minikube IP:
minikube ip
Then get the node port of your service:
kubectl get service rest-api-np -n rest-api-namespace
Once you have these two details, just open http://(minikube-ip):(node-port)
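On minikube you can also let the CLI assemble that URL for you; a minimal sketch, assuming the namespaced service above:
minikube service rest-api-np --namespace rest-api-namespace --url
# prints something like http://192.168.49.2:32116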

How to explicitly define an Endpoint of a Kubernetes Service

I've provisioned a kubernetes cluster on a couple of my own virtual machines via Kubespray. Kubespray uses project-calico as the default network plugin, which fits my requirement of proxying services in the cluster network to the outer world pretty well.
Kubespray deploys the apiserver itself as a ClusterIP Service. To make it reachable from outside, it defines an Endpoints object for this service with the master node's host IP address, which is routed to the internal ClusterIP by Calico, as far as I could figure out by myself.
My question is: how is it possible to define my own endpoint (for another service), as these get implicitly defined already by provisioning the service.yaml and cannot be overwritten? I would like to follow a similar approach to get my Rook/Ceph dashboard visible from outside the cluster.
EDIT: Note that kubectl get ingresses.networking.k8s.io --all-namespaces returns No resources found. and kubectl describe service kubernetes returns
Name: kubernetes
Namespace: default
Labels: component=apiserver
provider=kubernetes
Annotations: <none>
Selector: <none>
Type: ClusterIP
IP: 10.233.0.1
Port: https 443/TCP
TargetPort: 6443/TCP
Endpoints: 192.168.103.254:6443
Session Affinity: None
Events: <none>
I'll refer to your question:
How is it possible to define my own endpoint?
You'll have to:
1 ) Create a Service without a Pod selector:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 9376
(At this point, no auto-generated Endpoints will be created by Kubernetes, because it cannot decide which pods those Endpoints should refer to.)
2 ) Create an Endpoints object and map it to the desired network address and port where the external resource is running:
apiVersion: v1
kind: Endpoints
metadata:
  name: my-service
subsets:
  - addresses:
      - ip: 192.0.2.45
    ports:
      - port: 9376
(*) Notice that there should be a match between the service name and the name of the Endpoints object.
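You can confirm the manual mapping took effect with a quick check (a sketch, assuming the objects above were applied as-is):
kubectl describe service my-service
# the Endpoints line should now read 192.0.2.45:9376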
I am not exactly sure what you mean, but I think what you are looking for is the ability to expose services externally.
You can expose your services, like the Rook/Ceph Dashboard, with "publishing services" (service types that expose internal services externally).
As quoted from kubernetes documentation:
For some parts of your application (for example, frontends) you may want to expose a Service onto an external IP address, that's outside of your cluster.
Kubernetes ServiceTypes allow you to specify what kind of Service you want. The default is ClusterIP.
Type values and their behaviors are:
ClusterIP: Exposes the Service on a cluster-internal IP. Choosing this value makes the Service only reachable from within the cluster. This is the default ServiceType.
NodePort: Exposes the Service on each Node's IP at a static port (the NodePort). A ClusterIP Service, to which the NodePort Service routes, is automatically created. You'll be able to contact the NodePort Service, from outside the cluster, by requesting <NodeIP>:<NodePort>.
LoadBalancer: Exposes the Service externally using a cloud provider's load balancer. NodePort and ClusterIP Services, to which the external load balancer routes, are automatically created.
ExternalName: Maps the Service to the contents of the externalName field (e.g. foo.bar.example.com), by returning a CNAME record with its value. No proxying of any kind is set up.
Here is an example from documentation.
You can also define the Services with yaml manifests like this:
apiVersion: v1
kind: Service
metadata:
  name: examplelb
spec:
  type: LoadBalancer
  selector:
    app: asd
  ports:
    - name: koala
      port: 22223
      targetPort: 22225
      nodePort: 31913
    - name: grisly
      port: 22224
      targetPort: 22226
      nodePort: 31914
    - name: polar
      port: 22225
      targetPort: 22227
      nodePort: 31915
This makes pods with the label app: asd have the following ports exposed, with the pattern internal port 22223 exposed on node port 31913, and so on.
$ kubectl get svc examplelb
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
examplelb LoadBalancer 10.111.8.204 <pending> 22223:31913/TCP,22224:31914/TCP,22225:31915/TCP 7d2h
If a Service of type LoadBalancer has its External-IP pending, you can still access all those ports on each node as NodePorts.
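For example, while the EXTERNAL-IP is still pending, the koala port can be reached through any node; a sketch, assuming 10.0.0.5 is one of your node IPs:
curl http://10.0.0.5:31913
# node port 31913 maps to service port 22223, which forwards to container port 22225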
Hope this helps.

Unable to connect to external load balancer even after exposing service in kubernetes

I have the following deployment file
apiVersion: apps/v1
kind: Deployment
metadata:
  name: family-tree-deployment
  labels:
    app: familytree
spec:
  replicas: 1
  selector:
    matchLabels:
      app: familytree
  template:
    metadata:
      labels:
        app: familytree
    spec:
      containers:
        - name: familytree
          image: index.docker.io/koustubh/familytree:v1.0
          ports:
            - containerPort: 8080
I could successfully create the deployment using kubectl create -f deploy.yml
Now, I simply exposed this deployment with the following command
kubectl expose deployment family-tree-deployment --type=LoadBalancer --name=familytree-service
The service was successfully created.
The output is
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
familytree-service LoadBalancer 10.51.244.161 35.221.113.235 8080:30505/TCP 1h
$ kubectl describe svc familytree-service
Name: familytree-service
Namespace: default
Labels: app=familytree
Annotations: <none>
Selector: app=familytree
Type: LoadBalancer
IP: 10.51.244.161
LoadBalancer Ingress: 35.221.113.235
Port: <unset> 8080/TCP
TargetPort: 8080/TCP
NodePort: <unset> 30505/TCP
Endpoints: 10.48.4.7:8080
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
I could log in to the pod and I made sure the service is working.
However, when I use the external IP of the load balancer and query my API, the connection times out.
I have made sure the firewall allows port 8080, and my application is running on port 8080.
The generated Service object looks perfectly valid, so we can exclude a label issue or a missing public IP address. Besides, you can access your Service internally, which most likely means the firewall rule was applied incorrectly.
Please ensure you allow incoming traffic as follows:
from the internet to the load balancer on TCP port 8080
from the load balancer to all Kubernetes nodes on TCP port 30505
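On GCP (which the 35.221.x.x address suggests) the node-level rule could look like the following sketch; the rule name and network are placeholder assumptions:
gcloud compute firewall-rules create allow-familytree-nodeport \
  --network=default \
  --allow=tcp:30505
# lets the load balancer and external clients reach NodePort 30505 on every node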

Why don't my Kubernetes services publish on the port I specify?

I've been tinkering with Kubernetes on and off for the past few years and I am not sure if this has always been the case (maybe this behavior changed recently) but I cannot seem to get Services to publish on the ports I intend - they always publish on a high random port (>30000).
For instance, I'm going through this walkthrough on Ingress and I create the following Deployment and Service objects per the instructions:
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: hello-world-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
        - image: "gokul93/hello-world:latest"
          imagePullPolicy: Always
          name: hello-world-container
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: hello-world-svc
spec:
  ports:
    - port: 9376
      protocol: TCP
      targetPort: 8080
  selector:
    app: hello-world
  type: NodePort
According to this, I should have a Service that's listening on port 8080, but instead it's a high, random port:
~$ kubectl describe svc hello-world-svc
Name: hello-world-svc
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=hello-world
Type: NodePort
IP: 10.109.24.16
Port: <unset> 8080/TCP
TargetPort: 8080/TCP
NodePort: <unset> 31669/TCP
Endpoints: 10.40.0.4:8080
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
I also verified that none of my nodes are listening on 8080, but they are listening on 31669.
This isn't super ideal - especially considering that the Ingress portion will need to know what servicePort is being used (the walkthrough references this at 8080).
By the way, when I create the Ingress controller, this behavior is the same - rather than listening on 80 and 443 like a good load balancer, it's listening on high random ports.
Am I missing something? Am I doing it all wrong?
Matt,
The reason a random port is being allocated is because you are creating a service of type NodePort.
K8s documentation explains NodePort here
Based on your config, the service is exposed on port 9376 (and the backend port is 8080). So hello-world-svc should be available at 10.109.24.16:9376. Essentially this service can be reached in one of the following ways:
Service ip/port :- 10.109.24.16:9376
Node ip/port :- [your compute node ip]:31669 <-- this is created because your service is of type NodePort
You can also query the pod directly to test that the pod is in-fact exposing a service.
Pod ip/port: 10.40.0.4:8080
Since your eventual goal is to use an Ingress controller for external reachability to your service, type: ClusterIP might suffice for your needs.
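If you go that route, a minimal Ingress sketch (written against the current networking.k8s.io/v1 API rather than the older version the walkthrough uses) that fronts the service might look like:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-world-ingress
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: hello-world-svc
                port:
                  number: 9376 # the Service port, not the NodePort
The Ingress controller itself terminates on 80/443 and proxies to the Service port, so the high NodePort never has to appear in client-facing URLs.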