Expose deployment via service type NodePort on DigitalOcean Kubernetes

I'm implementing a solution in Kubernetes for several clients, and I want to monitor my clusters with Prometheus. However, because this can scale quickly, and I want to reduce costs, I will use Prometheus federation to scrape the different Kubernetes clusters, which means I need to expose my Prometheus deployment.
I already have this working with a service of type LoadBalancer exposing my Prometheus deployment, but this approach adds an extra expense to my infrastructure (a DigitalOcean LB).
Is it possible to do this with a service of type NodePort, exposing a port on my cluster, something like this:
XXXXXXXXXXXXXXXX.k8s.ondigitalocean.com:9090
so that my master Prometheus can use this URL to scrape all the "slave" Prometheus instances?
I already tried, but I can't reach the port on my cluster; something is blocking it. I even deleted my firewall to make sure nothing was interfering with this setup, but still nothing.
This is my service:
Name: my-nodeport-service
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"my-nodeport-service","namespace":"default"},"spec":{"ports":[{"na...
Selector: app=nginx
Type: NodePort
IP: 10.245.162.125
Port: http 80/TCP
TargetPort: 80/TCP
NodePort: http 30800/TCP
Endpoints: 10.244.2.220:80
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
Can anybody help me please?
---
You can set up the host XXXXXXXXXXXXXXXX.k8s.ondigitalocean.com:9090 to act as your load balancer with Nginx.
Try setting up an Nginx TCP load balancer.
Note: you will be using the Nginx stream module, and if you want to use open-source Nginx rather than Nginx Plus, you may have to compile your own Nginx with the --with-stream option.
Example config file:
```
events {
    worker_connections 1024;
}

stream {
    upstream stream_backend {
        server dhcp-180.example.com:446;
        server dhcp-185.example.com:446;
        server dhcp-186.example.com:446;
        server dhcp-187.example.com:446;
    }

    server {
        listen 446;
        proxy_pass stream_backend;
    }
}
```
After running Nginx, host lb.example.com acts as the load balancer, and test connections should be spread across the backends.
In this example Nginx is configured to use round-robin, so every new connection ends up on a different host/container.
Note: the container hostname is the same as the node hostname; this is due to hostNetwork.
There are some drawbacks to this solution:
- defining hostNetwork reserves the host's port(s) for all the containers running in the pod
- with one load balancer you have a single point of failure
- every time a node is added to or removed from the cluster, the load balancer must be updated
This way, one can set up a Kubernetes cluster to route ingress/egress TCP connections from/to outside of the cluster.
Useful post: load-balancer-tcp.
NodePort documentation: nodePort.
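If you still want to try the NodePort route itself, a minimal sketch of such a Service for Prometheus could look like this (the name, namespace, selector label and nodePort value are assumptions, not taken from the question; the nodePort has to fall inside the default 30000-32767 range):
```
apiVersion: v1
kind: Service
metadata:
  name: prometheus-nodeport        # hypothetical name
  namespace: monitoring            # assumed namespace
spec:
  type: NodePort
  selector:
    app: prometheus                # assumed label on the Prometheus pods
  ports:
    - name: web
      port: 9090                   # service port inside the cluster
      targetPort: 9090             # port the Prometheus container listens on
      nodePort: 30900              # must be in the 30000-32767 range
```
The scraping master would then target a worker node's public IP on port 30900, and that port still has to be allowed by whatever firewall sits in front of the nodes.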

Related

How to achieve the functionality of a headless service + port translation of the FQDNs that are created by that headless service in Kubernetes

What's the best way to achieve the functionality of a headless service + port translation for the FQDNs that are created by that headless service?
I have created this headless service where the selector is a StatefulSet with replicas=3, basically creating the headless service to get the FQDN functionality. But I also want port translation for these FQDNs. For example, when some internal microservice uses :443 it should be routed to :7443 on the pod. What's the best way to achieve that?
Name: sbijendra-service
Namespace: avictrl-aviproject-aviclusterinstaller-system
Labels: app.kubernetes.io/instance=sbijendra
Annotations: <none>
Selector: app.kubernetes.io/instance=sbijendra-sts,app.kubernetes.io/kind=sbijendra-statefulset,app.kubernetes.io/name=sbijendra
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: None
IPs: None
Port: https 443/TCP
TargetPort: 7443/TCP
Endpoints: 100.109.187.135:7443,100.111.94.144:7443,100.127.8.17:7443
Session Affinity: None
Events: <none>
I added the port and targetPort in the headless service itself, but it looks like that is not going to work.
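For reference, here is a sketch of the headless Service described above, reconstructed from the describe output (the names, labels and ports are copied from that output; the manifest itself is an assumption, since only the describe output is shown):
```
apiVersion: v1
kind: Service
metadata:
  name: sbijendra-service
  namespace: avictrl-aviproject-aviclusterinstaller-system
spec:
  clusterIP: None                  # headless: DNS returns the pod IPs directly
  selector:
    app.kubernetes.io/instance: sbijendra-sts
    app.kubernetes.io/kind: sbijendra-statefulset
    app.kubernetes.io/name: sbijendra
  ports:
    - name: https
      port: 443                    # the port clients are meant to use
      targetPort: 7443             # the port the pods actually listen on
```
As the answer to the next question below explains, a headless Service resolves directly to the pod IPs, so the 443 -> 7443 translation in the port fields never comes into play.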

Kubernetes service responding on different port than assigned port

I've deployed a few services and found one service behaving differently from the others. I configured it to listen on port 8090 (which maps to 8443 internally), but the request works only if I send it on port 8080. Here's my YAML file for the service (stripped down to essentials); there is a deployment which encapsulates the service and container:
```
apiVersion: v1
kind: Service
metadata:
  name: uisvc
  namespace: default
  labels:
    helm.sh/chart: foo-1
    app.kubernetes.io/name: foo
    app.kubernetes.io/instance: rb-foo
spec:
  clusterIP: None
  ports:
    - name: http
      port: 8090
      targetPort: 8080
  selector:
    app.kubernetes.io/component: uisvc
```
After installing the Helm chart, when I run kubectl get svc, I get the following output:
fooaccess ClusterIP None <none> 8888/TCP 119m
fooset ClusterIP None <none> 8080/TCP 119m
foobus ClusterIP None <none> 6379/TCP 119m
uisvc ClusterIP None <none> 8090/TCP 119m
However, when I ssh into one of the other running containers and issue a curl request on port 8090, I get "Connection refused". If I curl "http://uisvc:8080", then I get the right response. The container is running a Spring Boot application which by default listens on 8080. The only explanation I could come up with is that somehow the port/targetPort is being ignored in this config and other pods are reaching the Spring service inside directly.
Is this behaviour correct? Why is it not listening on 8090? How should I make it work this way?
Edit: Output for kubectl describe svc uisvc
Name: uisvc
Namespace: default
Labels: app.kubernetes.io/instance=foo-rba
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=rba
helm.sh/chart=rba-1
Annotations: meta.helm.sh/release-name: foo
meta.helm.sh/release-namespace: default
Selector: app.kubernetes.io/component=uisvc
Type: ClusterIP
IP: None
Port: http 8090/TCP
TargetPort: 8080/TCP
Endpoints: 172.17.0.8:8080
Session Affinity: None
Events: <none>
This is expected behavior, since you used a headless service.
Headless Services are used as a service discovery mechanism: instead of returning a single DNS A record, the DNS server returns multiple A records for your service, each pointing to the IP of an individual pod that backs the service. So you do a simple DNS A record lookup and get the IPs of all of the pods that are part of the service.
Since a headless service doesn't create iptables rules but creates DNS records instead, you interact directly with your pod instead of going through a proxy. So if you resolve <servicename>:<port> you will get <podN_IP>:<port> and your connection goes to the pod directly. As long as all of this is in the same namespace, you don't have to resolve it by the full DNS name.
With several pods, DNS will give you all of them, just in random order (or in round-robin order). The order depends on the DNS server implementation and settings.
For more reading please visit:
Services-networking/headless-services
This Stack Overflow question with a great answer explaining how headless services work
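Following that explanation, if the 8090 -> 8080 mapping is actually wanted, a minimal sketch of the change would be to drop clusterIP: None so the Service gets a cluster IP and kube-proxy can translate the port (this is an inference from the answer above, not something it states explicitly):
```
apiVersion: v1
kind: Service
metadata:
  name: uisvc
  namespace: default
spec:
  # no "clusterIP: None" here -> a normal ClusterIP Service, so the port mapping applies
  selector:
    app.kubernetes.io/component: uisvc
  ports:
    - name: http
      port: 8090        # what other pods connect to: http://uisvc:8090
      targetPort: 8080  # where the Spring Boot container actually listens
```
With that in place, curl http://uisvc:8090 from another pod would be forwarded to the container on 8080, at the cost of losing the headless per-pod DNS behaviour.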

EKS ELB: odd instances in the list

I've configured an application to run on 2 EC2 instances, with a k8s service of type LoadBalancer for this application (Selector: app=some-app). I also have 10+ instances running in the EKS cluster. According to the service output, everything is OK:
Name: some-app
Namespace: default
Labels: app=some-app
Annotations: external-dns.alpha.kubernetes.io/hostname: some-domain
service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: 3600
service.beta.kubernetes.io/aws-load-balancer-internal: true
Selector: app=some-app
Type: LoadBalancer
IP: 172.20.206.150
LoadBalancer Ingress: internal-blablabla.eu-west-1.elb.amazonaws.com
Port: default 80/TCP
TargetPort: 80/TCP
NodePort: default 30633/TCP
Endpoints: 10.30.21.238:80,10.30.22.38:80
Port: admin 80/TCP
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
But when I check the AWS console I see that all (10+) instances are included in the ELB. (If I use an Application Load Balancer, only 2 instances are present.)
Is there any configuration to remove the odd instances?
That's the default behaviour for the ELB/NLB: once traffic hits an instance, kube-proxy redirects it to the instances where your pods are running.
If you're using the ALB ingress controller, then again it's standard behaviour: it will only add the instances where your pods are running, skipping the iptables mumbo jumbo ;)
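For context, here is roughly what the Service above amounts to, reconstructed from the describe output (only the describe output appears in the question, so the manifest itself is an assumption):
```
apiVersion: v1
kind: Service
metadata:
  name: some-app
  namespace: default
  labels:
    app: some-app
  annotations:
    external-dns.alpha.kubernetes.io/hostname: some-domain
    service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "3600"
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  externalTrafficPolicy: Cluster   # the default: every node proxies the service's NodePort
  selector:
    app: some-app
  ports:
    - name: default
      port: 80
      targetPort: 80
```
With the Cluster policy, every node in the cluster proxies the service's NodePort, which is why all 10+ instances get registered with the ELB while kube-proxy forwards the traffic to the two pods.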

Can't get traefik on Kubernetes working with an external IP in a home lab

I have a 3-node Kubernetes cluster running at home. I deployed traefik with Helm; however, it never gets an external IP. Since this is in the private IP address space, shouldn't I expect the external IP to be something in the same address space? Am I missing something critical here?
$ kubectl describe svc traefik --namespace kube-system
Name: traefik
Namespace: kube-system
Labels: app=traefik
chart=traefik-1.64.0
heritage=Tiller
release=traefik
Annotations: <none>
Selector: app=traefik,release=traefik
Type: NodePort
IP: 10.233.62.160
Port: http 80/TCP
TargetPort: http/TCP
NodePort: http 31111/TCP
Endpoints: 10.233.86.47:80
Port: https 443/TCP
TargetPort: httpn/TCP
NodePort: https 30690/TCP
Endpoints: 10.233.86.47:8880
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
$ kubectl get svc traefik --namespace kube-system -w
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
traefik NodePort 10.233.62.160 <none> 80:31111/TCP,443:30690/TCP 133m
Use MetalLB to get an LB IP. More on their site.
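A minimal sketch of a MetalLB layer-2 configuration for a home lab like this, assuming the classic ConfigMap format MetalLB used at the time (newer MetalLB releases configure this through CRDs instead); the address range is a placeholder to replace with free IPs on your LAN:
```
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250   # placeholder range of free IPs on the home LAN
```
With MetalLB running and the traefik Service switched to type LoadBalancer, the EXTERNAL-IP column gets filled from that pool.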
An external IP is assigned when you run on an external cloud provider platform like Google Cloud Platform.
In your case, you can access the traefik service at the URL below:
node-host:nodeport
http://<hostname-of-worker-node>:31111
As seen in the outputs, the type of your service is NodePort. With this type, no external IP is exposed. Here is the definition from the official documentation:
If you set the type field to NodePort, the Kubernetes master will allocate a port from a range specified by --service-node-port-range flag (default: 30000-32767), and each Node will proxy that port (the same port number on every Node) into your Service. That port will be reported in your Service's .spec.ports[*].nodePort field.
If you want to reach your service from outside, you have to use the IP address of your computer and the port that Kubernetes exposed, like this:
http://IP_OF_YOUR_COMPUTER:31111
You can read this page for details.
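For reference, the traefik Service above boils down to roughly the following (reconstructed from the describe output; the real manifest was generated by the Helm chart, so treat this as an approximation):
```
apiVersion: v1
kind: Service
metadata:
  name: traefik
  namespace: kube-system
  labels:
    app: traefik
spec:
  type: NodePort                  # no external IP is allocated for this type
  selector:
    app: traefik
    release: traefik
  ports:
    - name: http
      port: 80
      targetPort: http            # named container port, as in the describe output
      nodePort: 31111             # reachable as http://<node-ip>:31111
    - name: https
      port: 443
      targetPort: httpn           # port name exactly as shown in the describe output
      nodePort: 30690
```
Because the type is NodePort, the EXTERNAL-IP column stays <none>; the service is only reachable on <node-ip>:31111 and <node-ip>:30690 unless something like MetalLB provides a LoadBalancer IP.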

Access to service IP from the pod

I have a pod with MySQL and a service to provide access from outside, so I can connect to my database at 192.168.1.29:3306 from another machine.
But how can I connect from another pod in the same cluster (on the same node)?
That is my service description:
Name: etl-mysql
Namespace: default
Labels: run=etl-mysql
Annotations: field.cattle.io/publicEndpoints=[{"addresses":["192.168.1.20"],"port":31211,"protocol":"TCP","serviceName":"default:etl-mysql","allNodes":true}]
kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"run":"etl-mysql"},"name":"etl-mysql","namespace":"default"},"spec":{"extern...
Selector: run=etl-mysql
Type: NodePort
IP: 10.43.44.58
External IPs: 192.168.1.29
Port: etl-mysql-port 3306/TCP
TargetPort: 3306/TCP
NodePort: etl-mysql-port 31211/TCP
Endpoints: 10.42.1.87:3306
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
Kubernetes has a built-in DNS that registers services automatically, resulting in simple-to-use DNS addresses like this: http://{servicename}.{namespace}:{servicePort}
If you are in the same namespace you can omit the namespace part, and if your service listens on port 80 that part can be omitted as well.
If you need further information, the following documentation will help you: DNS for Services and Pods
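A small sketch to make that concrete for the service above: a throwaway client pod that reaches MySQL by its in-cluster DNS name rather than by the node or external IP (the image tag and credentials are placeholders, not from the question):
```
apiVersion: v1
kind: Pod
metadata:
  name: mysql-client-test          # hypothetical helper pod
  namespace: default
spec:
  restartPolicy: Never
  containers:
    - name: client
      image: mysql:5.7             # placeholder image that ships the mysql CLI
      command:
        - mysql
        - -h
        - etl-mysql                # same namespace, so the short service name resolves
        - -P
        - "3306"
        - -u
        - root                     # placeholder credentials, not from the question
        - -pEXAMPLE_PASSWORD
```
From another namespace, etl-mysql.default.svc.cluster.local works as well; since the service listens on the MySQL default port 3306, the host part is the only piece that really changes.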