Can't get traefik on kubernetes to work with an external IP in a home lab - kubernetes

I have a 3-node Kubernetes cluster running at home. I deployed traefik with helm; however, it never gets an external IP. Since this is in the private IP address space, shouldn't I expect the external IP to be something in the same address space? Am I missing something critical here?
$ kubectl describe svc traefik --namespace kube-system
Name: traefik
Namespace: kube-system
Labels: app=traefik
chart=traefik-1.64.0
heritage=Tiller
release=traefik
Annotations: <none>
Selector: app=traefik,release=traefik
Type: NodePort
IP: 10.233.62.160
Port: http 80/TCP
TargetPort: http/TCP
NodePort: http 31111/TCP
Endpoints: 10.233.86.47:80
Port: https 443/TCP
TargetPort: https/TCP
NodePort: https 30690/TCP
Endpoints: 10.233.86.47:8880
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
$ kubectl get svc traefik --namespace kube-system -w
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
traefik NodePort 10.233.62.160 <none> 80:31111/TCP,443:30690/TCP 133m

Use MetalLB to get a LoadBalancer IP. More on their site.
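For reference, a minimal MetalLB layer-2 configuration could look like this (a sketch: the address range is an assumption, pick a free range on your home LAN; newer MetalLB releases configure this through CRDs instead of a ConfigMap). With this in place, and the traefik Service switched to type LoadBalancer, it gets an external IP from the pool:
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.0.240-192.168.0.250   # assumed free range on the home LAN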

An external IP is assigned automatically on cloud provider platforms like Google Cloud Platform.
In your case, you can access the traefik service at the following URL:
node-host:nodeport
http://<hostname-of-worker-node>:31111

As seen in the output, the type of your service is NodePort. With this type, no external IP is exposed. Here is the definition from the official documentation:
If you set the type field to NodePort, the Kubernetes master will
allocate a port from a range specified by --service-node-port-range
flag (default: 30000-32767), and each Node will proxy that port (the
same port number on every Node) into your Service. That port will be
reported in your Service’s .spec.ports[*].nodePort field.
If you want to reach your service from outside the cluster, you have to use the IP address of one of your nodes and the port that Kubernetes exposed, like this:
http://IP_OF_YOUR_COMPUTER:31111
You can read this page for details.
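For example (a sketch; the actual node IP depends on your environment):
$ kubectl get nodes -o wide            # note the INTERNAL-IP of each node
$ curl http://<node-internal-ip>:31111
Since your service's External Traffic Policy is Cluster, any node will proxy the request to the service, even a node not running a traefik pod.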

Related

kubernetes LoadBalancer service target port set as random in GCP instead of as configured

This is the simplest config straight from the docs, but when I create the service, kubectl lists the target port as something random. Setting the target port to 1337 in the YAML:
apiVersion: v1
kind: Service
metadata:
  name: sails-svc
spec:
  selector:
    app: sails
  ports:
  - port: 1337
    targetPort: 1337
  type: LoadBalancer
And this is what k8s sets up for services:
kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP <X.X.X.X> <none> 443/TCP 23h
sails LoadBalancer <X.X.X.X> <X.X.X.X> 1337:30203/TCP 3m6s
svc-postgres ClusterIP <X.X.X.X> <none> 5432/TCP 3m7s
Why is k8s setting the target port to 30203 when I'm specifying 1337? It does the same thing if I try other port numbers; 80 gets 31887. I've read the docs, but disabling those attributes did nothing in GCP. What am I not configuring correctly?
The kubectl get services output shows Port:NodePort/Protocol information. By default, and for convenience, the Kubernetes control plane allocates a node port from a range (default: 30000-32767); refer to the example in this documentation.
To get the TargetPort information, try using
kubectl get service <your service name> --output yaml
This command shows all port details, and the stable external IP address under loadBalancer:ingress:.
Refer to this documentation for more details on creating a Service of type LoadBalancer.
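Abridged, hypothetical output of that command (values invented for illustration):
spec:
  ports:
  - nodePort: 30203    # auto-allocated node port
    port: 1337         # the Service port
    protocol: TCP
    targetPort: 1337   # the container port
status:
  loadBalancer:
    ingress:
    - ip: 203.0.113.10   # the stable external IP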
Maybe this was tripping me up more than it should have, due to some redirects I didn't realize were happening, but after ironing out some things with my internal container this worked.
Yields:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.3.240.1 <none> 443/TCP 28h
sails LoadBalancer 10.3.253.83 <X.X.X.X> 1337:30766/TCP 9m59s
svc-postgres ClusterIP 10.3.248.7 <none> 5432/TCP 12m
I can curl against EXTERNAL-IP:1337. The internal target port was what was tripping me up. I thought that meant my pod needed to open up to that port, and that pod applications were supposed to bind to that port (i.e. 30766), but that's not the case. That port is some internal port mapping to the pod that I still don't fully understand yet, but the pod still gets external traffic on port 1337 to the pod's port 1337. I'd like to understand what's going on there better as I get more into the k8s Networking section of the docs, or if anyone can enlighten me.
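For what it's worth, a sketch of how the three port fields relate, using the values from the output above:
ports:
- port: 1337        # the Service's own port: what clients hit at EXTERNAL-IP:1337
  targetPort: 1337  # the container port the pod actually listens on
  # nodePort (30766 here) is auto-allocated: it is opened on every node and
  # used by the cloud load balancer to reach the cluster, never by the pod.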

Kubernetes - Curl a Cluster-IP Service

I'm following this kubernetes tutorial to create a service https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/#creating-a-service
I'm using minikube in my local environment. Everything works fine, but I cannot curl my cluster IP; I get an operation timeout:
curl: (7) Failed to connect to 10.105.7.117 port 80: Operation timed out
My kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 5d17h
my-nginx ClusterIP 10.105.7.117 <none> 80/TCP 42h
It seems that I'm having the same issue as this person here, who did not find an answer to their problem: https://github.com/kubernetes/kubernetes/issues/86471
I have tried to do the same in my gcloud console, but I get the same result: I can only curl a service's external IP.
If I understood correctly, I'm supposed to already be inside my minikube local cluster when I start minikube, so I should be able to curl the service as mentioned in the tutorial.
What am I doing wrong?
Although each Pod has a unique IP address, those IPs are not exposed outside the cluster without a Service. Services allow your applications to receive traffic. Services can be exposed in different ways by specifying a type in the ServiceSpec:
ClusterIP (default) - Exposes the Service on an internal IP in the cluster. This type makes the Service only reachable from within the cluster. That is why you cannot access your service via ClusterIP from outside the cluster.
NodePort - Exposes the Service on the same port of each selected Node in the cluster using NAT. Makes a Service accessible from outside the cluster using <NodeIP>:<NodePort>. Superset of ClusterIP.
kind: Service
apiVersion: v1
metadata:
  name: example
  namespace: example
spec:
  type: NodePort
  selector:
    app: example
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080
    name: ui
Then execute the command:
$ kubectl get svc --namespace=example
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
example NodePort yy.zz.xx.xx <none> 8080:30960/TCP 1d
Get the minikube IP to use as the node IP:
$ minikube ip
aa.bb.cc.dd
then you can curl it:
curl http://aa.bb.cc.dd:8080
LoadBalancer - Creates an external load balancer in the current cloud (if supported) and assigns a fixed, external IP to the Service. Superset of NodePort.
kind: Service
apiVersion: v1
metadata:
  name: example
spec:
  selector:
    app: example
  ports:
  - protocol: "TCP"
    port: 8080
    targetPort: 8080
  type: LoadBalancer
  externalIPs:
  - <your minikube ip>
then you can curl it:
$ curl http://yourminikubeip:8080/
ExternalName - Exposes the Service using an arbitrary name (specified by externalName in the spec) by returning a CNAME record with the name. No proxy is used. This type requires v1.7 or higher of kube-dns. The service itself is only exposed within the cluster; however, the FQDN external-name is not handled or controlled by the cluster. It is likely a publicly accessible URL, so you can curl it from anywhere. You'll have to configure your domain in a way that restricts who can access it.
The service type externalName is external to the cluster and really only allows for a CNAME redirect from within your cluster to an external path.
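A minimal ExternalName sketch (the service name and target domain are made up for illustration):
kind: Service
apiVersion: v1
metadata:
  name: example-external
spec:
  type: ExternalName
  externalName: my-db.example.com   # hypothetical external FQDN
Inside the cluster, example-external then resolves to a CNAME for my-db.example.com.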
See more: exposing-services-kubernetes.
ClusterIP is only available inside the Kubernetes network.
If you want to be able to hit this from outside of the cluster, use a LoadBalancer to expose a public IP that you can then access from outside of the cluster.
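One way to do that with the existing service (a sketch; on minikube you would also run minikube tunnel in a separate terminal so the LoadBalancer actually gets an IP):
kubectl patch svc my-nginx -p '{"spec": {"type": "LoadBalancer"}}'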
Or:
kubectl port-forward <pod_name> 8080:80
then curl
curl http://localhost:8080
which will route through the port-forward to port 80 of the pod.

expose private kubernetes cluster with NodePort type service

I have created a VPC-native cluster on GKE, with master authorized networks disabled on it.
I think I did everything correctly, but I still can't access the app externally.
Below is my service manifest.
apiVersion: v1
kind: Service
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.version: 1.16.0 (0c01309)
  creationTimestamp: null
  labels:
    io.kompose.service: app
  name: app
spec:
  ports:
  - name: '3000'
    port: 80
    targetPort: 3000
    protocol: TCP
    nodePort: 30382
  selector:
    io.kompose.service: app
  type: NodePort
The app's container port is 3000, and I checked from the logs that it is working.
I added a firewall rule to open port 30382 in my VPC network too.
I still can't access the node on the specified nodePort.
Is there anything I am missing?
kubectl get ep:
NAME ENDPOINTS AGE
app 10.20.0.10:3000 6h17m
kubernetes 34.69.50.167:443 29h
kubectl get svc:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
app NodePort 10.24.6.14 <none> 80:30382/TCP 6h25m
kubernetes ClusterIP 10.24.0.1 <none> 443/TCP 29h
In Kubernetes, a Service is used to communicate with pods.
To expose pods outside the Kubernetes cluster, you will need a k8s Service of NodePort type.
The NodePort setting applies to Kubernetes Services. By default, Kubernetes Services are accessible at their ClusterIP, which is an internal IP address reachable only from inside the Kubernetes cluster. The ClusterIP enables applications running within the pods to access the service. To make a service accessible from outside the cluster, a user can create a Service of type NodePort.
Please note that you need an external IP address assigned to one of the nodes in the cluster, and a firewall rule that allows ingress traffic to that port. As a result, kube-proxy on the Kubernetes node (the one the external IP address is attached to) will proxy that port to the pods selected by the service.
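On GKE that could look roughly like this (a sketch; the rule name is made up, and you may want to restrict the source range rather than open the port to everyone):
# Allow ingress traffic to the node port:
gcloud compute firewall-rules create allow-nodeport-30382 --allow tcp:30382
# Find a node's external IP, then test:
kubectl get nodes -o wide
curl http://<node-external-ip>:30382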

use prometheus with external ip address

We have a k8s cluster and I have an application running there.
Now I'm trying to add https://prometheus.io/
and I use the command
helm install stable/prometheus --version 6.7.4 --name my-prometheus
This command works and I got this:
NAME: my-prometheus
LAST DEPLOYED: Tue Feb 5 15:21:46 2019
NAMESPACE: default
STATUS: DEPLOYED
...
When I run the command
kubectl get services
I get this:
kubernetes ClusterIP 100.64.0.1 <none> 443/TCP 2d4h
my-prometheus-alertmanager ClusterIP 100.75.244.55 <none> 80/TCP 8m44s
my-prometheus-kube-state-metrics ClusterIP None <none> 80/TCP 8m43s
my-prometheus-node-exporter ClusterIP None <none> 9100/TCP 8m43s
my-prometheus-pushgateway ClusterIP 100.75.24.67 <none> 9091/TCP 8m43s
my-prometheus-server ClusterIP 100.33.26.206 <none> 80/TCP 8m43s
I didn't get any external IP.
Does someone know how to add it? Via a service? Any example for this?
Update
I've added the following YAML:
apiVersion: v1
kind: Service
metadata:
  name: prometheus-service
spec:
  selector:
    app: prometheus-server
  type: LoadBalancer
  ports:
  - port: 8080
    targetPort: 9090
    nodePort: 30001
which was created successfully.
Now when running kubectl get services I see the external IP:
my-prometheus-server LoadBalancer 100.33.26.206 8080:30001/TCP 80/TCP 8m43s
And when I use 100.33.26.206:30001 in the browser, nothing happens. Any idea?
I think what you are trying to do is create a service of type LoadBalancer; those have an internal and an external IP.
You can create one like any other service, but you should specify these two fields:
externalTrafficPolicy: Local
type: LoadBalancer
Updated:
There seems to be some confusion: you don't need an external IP to monitor your apps; it will only be used to access the Prometheus UI.
The UI is accessible on port 9090, but Prometheus never accesses the exporters through this service; it is Prometheus which scrapes the exporters.
Now, to access a service from the internet you should have a Google IP, but what you have still seems to be an internal IP: it's in the same subnet as the other ClusterIPs, and it should not be. Also, in place of an external IP it's showing a port redirect, which is also wrong, as the Prometheus UI is on port 9090 (if you didn't modify your configuration, it should still be). You should try removing the nodePort and leaving the port mapping to Kubernetes, as sketched below.
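That is, roughly the manifest from the question with the nodePort removed and the service pointing at Prometheus's 9090 (a sketch, not tested against this cluster):
apiVersion: v1
kind: Service
metadata:
  name: prometheus-service
spec:
  selector:
    app: prometheus-server
  type: LoadBalancer
  externalTrafficPolicy: Local
  ports:
  - port: 8080        # port exposed on the load balancer
    targetPort: 9090  # the Prometheus UI port
    # no nodePort: let Kubernetes allocate one automatically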
The Prometheus helm chart does support configuration of its service; see the documentation.
To configure the Prometheus server on a local cluster, follow these steps:
Create values.yaml:
server:
  service:
    servicePort: 31000
    type: LoadBalancer
    loadBalancerIP: localhost
or
server:
  service:
    nodePort: 31000
    type: NodePort
Add stable repo to helm (if missing):
helm repo add stable "https://kubernetes-charts.storage.googleapis.com/"
Install Prometheus:
helm install prometheus-demo stable/prometheus --values .\values.yaml
Wait 1-2 minutes; Prometheus should then be available at http://localhost:31000/

Kubernetes + metallb + traefik: how to get real client ip?

traefik.toml:
defaultEntryPoints = ["http", "https"]

[entryPoints]
  [entryPoints.http]
    address = ":80"
    [entryPoints.http.forwardedHeaders]
      trustedIPs = ["0.0.0.0/0"]
    [entryPoints.http.redirect]
      entryPoint = "https"
  [entryPoints.https]
    address = ":443"
    [entryPoints.https.tls]
    [entryPoints.https.forwardedHeaders]
      trustedIPs = ["0.0.0.0/0"]

[api]
traefik Service:
kind: Service
apiVersion: v1
metadata:
  name: traefik-ingress-service
  namespace: kube-system
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
  - protocol: TCP
    port: 80
    name: http
  - protocol: TCP
    port: 443
    name: https
  type: LoadBalancer
Then:
kubectl run source-ip-app --image=k8s.gcr.io/echoserver:1.4
deployment "source-ip-app" created
kubectl expose deployment source-ip-app --name=clusterip --port=80 --target-port=8080
service "clusterip" exposed
kubectl get svc clusterip
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
clusterip ClusterIP 10.5.55.102 <none> 80/TCP 2h
Create ingress for clusterip:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: clusterip-ingress
  namespace: default
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: clusterip.staging
    http:
      paths:
      - backend:
          serviceName: clusterip
          servicePort: 80
clusterip.staging IP: 192.168.0.69
From another PC with IP 192.168.0.100:
wget -qO - clusterip.staging
and get results:
CLIENT VALUES:
client_address=10.5.65.74
command=GET
real path=/
query=nil
request_version=1.1
request_uri=http://clusterip.staging:8080/
SERVER VALUES:
server_version=nginx: 1.10.0 - lua: 10001
HEADERS RECEIVED:
accept=text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
accept-encoding=gzip, deflate, br
accept-language=ru-RU,ru;q=0.8,en-US;q=0.5,en;q=0.3
cache-control=max-age=0
host=clusterip.staging
upgrade-insecure-requests=1
x-forwarded-for=10.5.64.0
x-forwarded-host=clusterip.staging
x-forwarded-port=443
x-forwarded-proto=https
x-forwarded-server=traefik-ingress-controller-755cc56458-t8q9k
x-real-ip=10.5.64.0
BODY:
-no body in request-
kubectl get svc --all-namespaces
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default clusterip NodePort 10.5.55.102 <none> 80:31169/TCP 19h
default kubernetes ClusterIP 10.5.0.1 <none> 443/TCP 22d
kube-system kube-dns ClusterIP 10.5.0.3 <none> 53/UDP,53/TCP 22d
kube-system kubernetes-dashboard ClusterIP 10.5.5.51 <none> 443/TCP 22d
kube-system traefik-ingress-service LoadBalancer 10.5.2.37 192.168.0.69 80:32745/TCP,443:30219/TCP 1d
kube-system traefik-web-ui NodePort 10.5.60.5 <none> 80:30487/TCP 7d
How do I get the real IP (192.168.0.100) in my installation? Why is x-real-ip 10.5.64.0? I could not find the answers in the documentation.
When kube-proxy uses iptables mode, it uses NAT to send data to the node where the payload runs, and you lose the original source IP address in that case.
As I understand it, you use MetalLB behind the Traefik ingress Service (because its type is LoadBalancer). That means traffic from the client to the backend goes this way:
Client -> MetalLB -> Traefik LB -> Traefik Service -> Backend pod.
Traefik works correctly and adds the x-* headers, including x-forwarded-for and x-real-ip, but they contain a masqueraded address, and here is why:
From the Metallb documentation:
MetalLB understands the service’s externalTrafficPolicy option and implements different announcements modes depending on the policy and announcement protocol you select.
Layer2
This policy results in uniform traffic distribution across all pods in the service. However, kube-proxy will obscure the source IP address of the connection when it does load-balancing, so your pod logs will show that external traffic appears to be coming from the cluster’s leader node.
BGP
“Cluster” traffic policy
With the default Cluster traffic policy, every node in your cluster will attract traffic for the service IP. On each node, the traffic is subjected to a second layer of load-balancing (provided by kube-proxy), which directs the traffic to individual pods.
......
The other downside of the “Cluster” policy is that kube-proxy will obscure the source IP address of the connection when it does its load-balancing, so your pod logs will show that external traffic appears to be coming from your cluster’s nodes.
“Local” traffic policy
With the Local traffic policy, nodes will only attract traffic if they are running one or more of the service’s pods locally. The BGP routers will load-balance incoming traffic only across those nodes that are currently hosting the service. On each node, the traffic is forwarded only to local pods by kube-proxy, there is no “horizontal” traffic flow between nodes.
This policy provides the most efficient flow of traffic to your service. Furthermore, because kube-proxy doesn’t need to send traffic between cluster nodes, your pods can see the real source IP address of incoming connections.
Finally, the only way to get the real source IP address is to use the "Local" traffic policy.
If you set externalTrafficPolicy: Local on the traefik Service, as sketched below, you will get what you want.
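For example, the traefik Service from the question with the one added field (a sketch):
kind: Service
apiVersion: v1
metadata:
  name: traefik-ingress-service
  namespace: kube-system
spec:
  externalTrafficPolicy: Local   # preserve the client source IP
  selector:
    k8s-app: traefik-ingress-lb
  ports:
  - protocol: TCP
    port: 80
    name: http
  - protocol: TCP
    port: 443
    name: https
  type: LoadBalancer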