I've been trying to use Traefik as an Ingress Controller on Google Cloud's container engine.
I got my http deployment/service up and running (when I exposed it with a normal LoadBalancer, it was answering fine).
I then removed the LoadBalancer, and followed this tutorial: https://docs.traefik.io/user-guide/kubernetes/
So I got a new traefik-ingress-controller deployment and service, and an ingress for traefik's ui which I can access through the kubectl proxy.
I then created the ingress for my http service, but here comes my issue: I can't find a way to expose it externally.
I want it to be accessible by anybody via an external IP.
What am I missing?
Here is the output of kubectl get --export all:
NAME READY STATUS RESTARTS AGE
po/mywebservice-3818647231-gr3z9 1/1 Running 0 23h
po/mywebservice-3818647231-rn4fw 1/1 Running 0 1h
po/traefik-ingress-controller-957212644-28dx6 1/1 Running 0 1h
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
svc/mywebservice 10.51.254.147 <none> 80/TCP 1d
svc/kubernetes 10.51.240.1 <none> 443/TCP 1d
svc/traefik-ingress-controller 10.51.248.165 <nodes> 80:31447/TCP,8080:32481/TCP 25m
svc/traefik-web-ui 10.51.248.65 <none> 80/TCP 3h
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deploy/mywebservice 2 2 2 2 1d
deploy/traefik-ingress-controller 1 1 1 1 3h
NAME DESIRED CURRENT READY AGE
rs/mywebservice-3818647231 2 2 2 23h
rs/traefik-ingress-controller-957212644 1 1 1 3h
You need to expose the Traefik service. Set the service spec type to LoadBalancer. Try the service file below, which I've used previously:
apiVersion: v1
kind: Service
metadata:
  name: traefik
spec:
  type: LoadBalancer
  selector:
    app: traefik
    tier: proxy
  ports:
  - port: 80
    targetPort: 80
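Once you apply it, GKE should provision an external IP for the service; a quick way to watch for it (assuming you saved the manifest as traefik-service.yaml, a filename I've picked for this example):
kubectl apply -f traefik-service.yaml
kubectl get svc traefik --watch
The EXTERNAL-IP column switches from <pending> to a real address once the cloud load balancer is ready.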
Related
My goal is to reproduce the observations in this blog post: https://medium.com/kubernetes-tutorials/monitoring-your-kubernetes-deployments-with-prometheus-5665eda54045
So far I am able to deploy the example rpc-app application in my cluster; the following shows that the two pods for this application are running:
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
default rpc-app-deployment-64f456b65-5m7j5 1/1 Running 0 3h23m 10.244.0.15 my-server-ip.company.com <none> <none>
default rpc-app-deployment-64f456b65-9mnfd 1/1 Running 0 3h23m 10.244.0.14 my-server-ip.company.com <none> <none>
The application exposes metrics and is confirmed by:
root@xxxxx:/u01/app/k8s # curl 10.244.0.14:8081/metrics
# HELP go_gc_duration_seconds A summary of the GC invocation durations.
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0"} 0
go_gc_duration_seconds{quantile="0.25"} 0
...
rpc_durations_seconds{service="uniform",quantile="0.5"} 0.0001021102787270781
rpc_durations_seconds{service="uniform",quantile="0.9"} 0.00018233200374804932
rpc_durations_seconds{service="uniform",quantile="0.99"} 0.00019828258205623097
rpc_durations_seconds_sum{service="uniform"} 6.817882693745326
rpc_durations_seconds_count{service="uniform"} 68279
My Prometheus pod is running in the same cluster. However, I am unable to see any rpc_* metrics in Prometheus.
monitoring prometheus-deployment-599bbd9457-pslwf 1/1 Running 0 30m 10.244.0.21 my-server-ip.company.com <none> <none>
In the Prometheus GUI, clicking Status -> Service Discovery, I got:
Service Discovery
rpc-metrics (0 / 3 active targets)
Clicking Status -> Targets shows nothing (0 targets).
Clicking Status -> Configuration:
The content can be seen as: https://gist.github.com/denissun/14835468be3dbef7bc924032767b9d7f
I am really new to Prometheus/Kubernetes monitoring; I'd appreciate your help troubleshooting this issue.
Update 1 - I created the service:
# cat rpc-app-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: rpc-app-service
  labels:
    app: rpc-app
spec:
  ports:
  - name: web
    port: 8081
    targetPort: 8081
    protocol: TCP
    nodePort: 32325
  selector:
    app: rpc-app
  type: NodePort
# kubectl get service rpc-app-service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
rpc-app-service NodePort 10.110.204.119 <none> 8081:32325/TCP 9h
Did you create the Kubernetes Service to expose the Deployment?
kubectl create -f rpc-app-service.yaml
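After creating it, it's worth confirming the Service actually selects your pods (a quick check; an empty ENDPOINTS column would mean the selector doesn't match the pod labels):
kubectl get endpoints rpc-app-service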
The Prometheus configuration watches Service endpoints, not Deployments or Pods.
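For illustration, an endpoints-based scrape job in prometheus.yml might look roughly like this. This is only a sketch: the job name and the app: rpc-app label mirror your Service above, while everything else is an assumption, since your full configuration lives in the gist:
scrape_configs:
- job_name: rpc-metrics
  kubernetes_sd_configs:
  - role: endpoints
  relabel_configs:
  # keep only endpoints whose backing Service carries app=rpc-app (assumption)
  - source_labels: [__meta_kubernetes_service_label_app]
    action: keep
    regex: rpc-app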
Have a look at the Prometheus Operator. It's slightly more involved than running a Prometheus Deployment in your cluster but it represents a state-of-the-art deployment of Prometheus with some elegant abstractions such as PodMonitors and ServiceMonitors.
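With the Operator installed, a ServiceMonitor matching the Service above could look like this sketch; the app: rpc-app label and the web port name come from the rpc-app-service manifest, while the release: prometheus label is an assumption (many Operator installs select monitors by such a label):
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: rpc-app
  labels:
    release: prometheus   # assumption: adjust to whatever your Prometheus CR selects
spec:
  selector:
    matchLabels:
      app: rpc-app
  endpoints:
  - port: web
    interval: 30s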
I am using kubespray to run a Kubernetes cluster on my laptop. The cluster runs on 7 VMs, and the roles of the VMs are spread as follows:
NAME STATUS ROLES AGE VERSION
k8s-1 Ready master 2d22h v1.16.2
k8s-2 Ready master 2d22h v1.16.2
k8s-3 Ready master 2d22h v1.16.2
k8s-4 Ready master 2d22h v1.16.2
k8s-5 Ready <none> 2d22h v1.16.2
k8s-6 Ready <none> 2d22h v1.16.2
k8s-7 Ready <none> 2d22h v1.16.2
I've installed https://istio.io/ to build a microservices environment.
I have 2 services running and would like to access them from outside:
k get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
greeter-service ClusterIP 10.233.50.109 <none> 3000/TCP 47h
helloweb ClusterIP 10.233.8.207 <none> 3000/TCP 47h
and the running pods:
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
default greeter-service-v1-8d97f9bcd-2hf4x 2/2 Running 0 47h 10.233.69.7 k8s-6 <none> <none>
default greeter-service-v1-8d97f9bcd-gnsvp 2/2 Running 0 47h 10.233.65.3 k8s-2 <none> <none>
default greeter-service-v1-8d97f9bcd-lkt6p 2/2 Running 0 47h 10.233.68.9 k8s-7 <none> <none>
default helloweb-77c9476f6d-7f76v 2/2 Running 0 47h 10.233.64.3 k8s-1 <none> <none>
default helloweb-77c9476f6d-pj494 2/2 Running 0 47h 10.233.69.8 k8s-6 <none> <none>
default helloweb-77c9476f6d-tnqfb 2/2 Running 0 47h 10.233.70.7 k8s-5 <none> <none>
The problem is, I cannot access the services from outside, because I do not have an EXTERNAL-IP address (remember, the cluster is running on my laptop).
k get svc istio-ingressgateway -n istio-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
istio-ingressgateway LoadBalancer 10.233.61.112 <pending> 15020:31311/TCP,80:30383/TCP,443:31494/TCP,15029:31383/TCP,15030:30784/TCP,15031:30322/TCP,15032:30823/TCP,15443:30401/TCP 47h
As you can see, the value in the EXTERNAL-IP column is <pending>.
The question is: how do I assign an EXTERNAL-IP to the istio-ingressgateway?
First of all, you can't make Kubernetes assign you an external IP address, as the LoadBalancer service type is cloud-provider specific. You could map your router's external IP address to it, I guess, but it is not trivial.
To reach the service, you can do this:
kubectl edit svc istio-ingressgateway -n istio-system
Change the type of the service from LoadBalancer to ClusterIP. You can also use NodePort. Actually, you can skip this step, as a LoadBalancer service already includes a NodePort and a ClusterIP; it is just to get rid of that pending status.
kubectl port-forward svc/istio-ingressgateway YOUR_LAPTOP_PORT:INGRESS_CLUSTER_IP_PORT -n istio-system
I don't know which port you want to access from your localhost. Say it's 80; you can do:
kubectl port-forward svc/istio-ingressgateway 8080:80 -n istio-system
Now port 8080 on your laptop (localhost:8080) will be mapped to port 80 of the istio-ingressgateway service.
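To sanity-check the forward you can then curl it locally (a sketch; whether / returns anything useful depends on your Gateway and VirtualService routes, which aren't shown here):
curl -v http://localhost:8080/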
By default, there is no way Kubernetes can assign an external IP to a LoadBalancer service.
This service type needs infrastructure support, which exists in cloud offerings like GKE, AKS, EKS, etc.
As you are running this cluster on your laptop, deploy the MetalLB load balancer to get an EXTERNAL-IP.
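For illustration, a minimal MetalLB Layer 2 configuration in the classic ConfigMap style could look like this (newer MetalLB releases use CRDs instead); the address range is an assumption and must be a set of free addresses reachable on your VM network:
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250
With that in place, the istio-ingressgateway service should receive an address from the pool instead of staying <pending>.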
It's not possible, as Suresh explained.
But if you want access from your laptop, you can use type NodePort in your service, which gives you access from outside the cluster.
You should first obtain the IP of one of your cluster nodes, then create your service with something like this:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-app
  type: NodePort
  ports:
  - name: http
    protocol: TCP
    port: 3000
    targetPort: 3000
    nodePort: 30000
After that, you can access it from your laptop at http://<node-ip>:30000.
There is no need to create an ingress for that.
You should use a port in range (30000-32767), as stated below:
If you set the type field to NodePort, the Kubernetes control plane allocates a port from a range specified by --service-node-port-range flag (default: 30000-32767).
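As for the address itself, list your nodes and take any node's IP; with a local cluster this is typically your VM's address (a quick sketch):
kubectl get nodes -o wide
curl http://<node-ip>:30000/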
If you are using minikube, just run:
$ minikube tunnel
$ k get svc -n istio-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
istio-ingressgateway LoadBalancer 10.111.187.167 127.0.0.1 15021:31949/TCP,80:32215/TCP,443:30585/TCP 9m48s
I am new to configuring Ingress rules for my Kubernetes cluster.
My Kubernetes cluster is deployed on Bare Metal. No cloud.
I followed this link to set up my nginx-controller with RBAC in my cluster.
This is what I have deployed:
# kubectl get all -n ingress-nginx
NAME READY STATUS RESTARTS AGE
pod/default-http-backend-7c5bc89cc9-ks6kd 1/1 Running 0 2h
pod/nginx-ingress-controller-5b6864749-8xbhf 1/1 Running 0 2h
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/default-http-backend ClusterIP 10.233.15.56 <none> 80/TCP 2h
service/ingress-nginx NodePort 10.233.38.84 <none> 80:31118/TCP,443:32003/TCP 2h
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deployment.apps/default-http-backend 1 1 1 1 2h
deployment.apps/nginx-ingress-controller 1 1 1 1 2h
NAME DESIRED CURRENT READY AGE
replicaset.apps/default-http-backend-7c5bc89cc9 1 1 1 2h
replicaset.apps/nginx-ingress-controller-5b6864749 1 1 1 2h
Given this setup, I want to access my Grafana dashboard using a URL.
My grafana setup is working perfectly fine.
# kubectl get all -n default
NAME READY STATUS RESTARTS AGE
pod/grafana-67c6585fbd-4jl7p 1/1 Running 0 2h
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/grafana NodePort 10.233.5.111 <none> 3000:32093/TCP 2h
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deployment.apps/grafana 1 1 1 1 2h
NAME DESIRED CURRENT READY AGE
replicaset.apps/grafana-67c6585fbd 1 1 1 2h
I can access the dashboard using http://10.27.239.145:32093 which is the IP of one of my K8S worker nodes.
Now, rather than accessing via IP:NodePort, I want to access it via a URL, e.g. grafana.test.mydomain.com.
So the ingress rule that I configured in my default namespace is:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
  creationTimestamp: 2018-09-25T20:32:24Z
  generation: 5
  name: grafana
  namespace: default
  resourceVersion: "28485"
  selfLink: /apis/extensions/v1beta1/namespaces/default/ingresses/jenkins-tls
  uid: 1c51cece-c102-11e8-bf0f-02000a1bef39
spec:
  rules:
  - host: grafana.test.mydomain.com
    http:
      paths:
      - backend:
          serviceName: grafana
          servicePort: 3000
        path: /
On my local laptop, from where I am testing, I've added the following entry to my /etc/hosts:
10.27.239.145 grafana.test.mydomain.com
And in my browser, I am trying to access http://grafana.test.mydomain.com but I only get This site can’t be reached
grafana.test.mydomain.com refused to connect.
I have a strong feeling that I am missing out on something, but I can't figure out what.
I changed the NodePort to ClusterIP, but no luck.
I know that my ingress controller is working, since every time I make a change to my ingress rules, I get logs from my ingress controller.
I0925 21:00:19.041440 9 event.go:221] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"grafana", UID:"1c51cece-c102-11e8-bf0f-02000a1bef39", APIVersion:"extensions/v1beta1", ResourceVersion:"28485", FieldPath:""}): type: 'Normal' reason: 'UPDATE' Ingress default/grafana
I0925 21:00:19.041732 9 controller.go:171] Configuration changes detected, backend reload required.
I0925 21:00:19.216044 9 controller.go:187] Backend successfully reloaded.
I0925 21:00:19.217645 9 controller.go:204] Dynamic reconfiguration succeeded.
Any help regarding what I might have missed will be strongly appreciated.
From what I see, you need to point grafana.test.mydomain.com to 10.233.38.84.
Basically, your nginx controller service directs the traffic to your ingress, and then your ingress forwards it to the backend on the nodePort (this is implicit in the ingress). It works for me, but I'm using an AWS ELB; I basically set grafana.test.mydomain.com to point to aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa-0000000000.us-west-2.elb.amazonaws.com
$ kubectl get all -n ingress-nginx
NAME READY STATUS RESTARTS AGE
pod/default-http-backend-6586bc58b6-snxbv 1/1 Running 0 1h
pod/grafana-5b969bb7f9-tsv5k 1/1 Running 0 52m
pod/nginx-ingress-controller-6bd7c597cb-lfwcf 1/1 Running 0 1h
pod/prometheus-server-5dbf9f4fc9-mnwn4 1/1 Running 0 53m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/default-http-backend ClusterIP 10.x.x.x <none> 80/TCP 1h
service/grafana NodePort 10.x.x.x <none> 3000:30073/TCP 52m
service/ingress-nginx LoadBalancer 10.x.x.x aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa-0000000000.us-west-2.elb.amazonaws.com 80:30276/TCP,443:32011/TCP 1h
service/prometheus-server NodePort 10.x.x.x <none> 9090:32419/TCP 53m
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deployment.apps/default-http-backend 1 1 1 1 1h
deployment.apps/grafana 1 1 1 1 52m
deployment.apps/nginx-ingress-controller 1 1 1 1 1h
deployment.apps/prometheus-server 1 1 1 1 53m
NAME DESIRED CURRENT READY AGE
replicaset.apps/default-http-backend-6586bc58b6 1 1 1 1h
replicaset.apps/grafana-5b969bb7f9 1 1 1 52m
replicaset.apps/nginx-ingress-controller-6bd7c597cb 1 1 1 1h
replicaset.apps/prometheus-server-5dbf9f4fc9 1 1 1 53m
$ kubectl describe ingress grafana-ingress -n ingress-nginx
Name: grafana-ingress
Namespace: ingress-nginx
Address: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa-0000000000.us-west-2.elb.amazonaws.com
Default backend: default-http-backend:80 (<none>)
Rules:
Host Path Backends
---- ---- --------
grafana.test.mydomain.com
/ grafana:3000 (<none>)
Annotations:
kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{"nginx.ingress.kubernetes.io/rewrite-target":"/"},"name":"grafana-ingress","namespace":"ingress-nginx"},"spec":{"rules":[{"http":{"paths":[{"backend":{"serviceName":"grafana","servicePort":3000},"path":"/"}]}}]}}
nginx.ingress.kubernetes.io/rewrite-target: /
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal CREATE 40m nginx-ingress-controller Ingress ingress-nginx/grafana-ingress
Normal UPDATE 22m (x2 over 40m) nginx-ingress-controller Ingress ingress-nginx/grafana-ingress
As far as I can see, you only have a NodePort Service on port 32093.
Your NodePort service publishes port 3000 as 32093 on every node address, as you have already proven, but you configured the Ingress to contact port 3000 on the grafana service.
Either add targetPort, port and nodePort to the Service for your Grafana instance, pointing targetPort and port to 3000, and either leave nodePort empty or set it to 32093. Then the ingress should work as you posted. Snippet:
ports:
- nodePort: 32093
  port: 3000
  protocol: TCP
  targetPort: 3000
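In context, the whole Service might look like this sketch; the selector is an assumption, so use whatever labels your Grafana Deployment actually carries:
apiVersion: v1
kind: Service
metadata:
  name: grafana
spec:
  type: NodePort
  selector:
    app: grafana          # assumption: match your Deployment's pod labels
  ports:
  - nodePort: 32093
    port: 3000
    protocol: TCP
    targetPort: 3000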
Or try setting servicePort in your ingress configuration to 32093 instead of 3000. Warning: I never tested this, and I do not know if Ingress supports it. According to the documentation it should, as NodePort is a superset of ClusterIP:
NodePort: Exposes the service on each Node's IP at a static port (the NodePort). A ClusterIP service, to which the NodePort service will route, is automatically created. You'll be able to contact the NodePort service, from outside the cluster, by requesting <NodeIP>:<NodePort>.
Edit
By the way: http://grafana.test.mydomain.com:32093 should already work with your current configuration (NodePort).
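You can also test the ingress path itself without relying on DNS by sending the Host header straight to the controller's NodePort (31118 for HTTP, from your service listing above):
curl -H 'Host: grafana.test.mydomain.com' http://10.27.239.145:31118/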
I have deployed Kubernetes and the dashboard onto a compute instance in Oracle Cloud.
The dashboard is installed along with Grafana on my compute instance.
NAME READY STATUS RESTARTS AGE
po/etcd-mst-instance1 1/1 Running 0 1h
po/heapster-7856f6b566-rkfx5 1/1 Running 0 1h
po/kube-apiserver-mst-instance1 1/1 Running 0 1h
po/kube-controller-manager-mst-instance1 1/1 Running 0 1h
po/kube-dns-d879d6bcb-b9zjf 3/3 Running 0 1h
po/kube-flannel-ds-lgklw 1/1 Running 0 1h
po/kube-proxy-g6vxm 1/1 Running 0 1h
po/kube-scheduler-mst-instance1 1/1 Running 0 1h
po/kubernetes-dashboard-dd5c889c-6vphq 1/1 Running 0 1h
po/monitoring-grafana-5d4d76cd65-p7n5l 1/1 Running 0 1h
po/monitoring-influxdb-787479f6fd-8qkg2 1/1 Running 0 1h
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
svc/heapster ClusterIP 10.98.200.184 <none> 80/TCP 1h
svc/kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP 1h
svc/kubernetes-dashboard ClusterIP 10.107.155.3 <none> 443/TCP 1h
svc/monitoring-grafana ClusterIP 10.96.130.226 <none> 80/TCP 1h
svc/monitoring-influxdb ClusterIP 10.105.163.213 <none> 8086/TCP 1h
I am trying to access the dashboard via SSH and ran the following on my local computer:
ssh -L localhost:8001:172.31.4.117:6443 opc@xxxxxxxx
However, it gives me this error:
Permission denied (publickey,gssapi-keyex,gssapi-with-mic).
I'm not sure what the best way to access the dashboard is. I am new to k8s and still at a beginner stage, so I wanted to ask for advice. I have also tried running kubectl proxy on my local computer, but when I try to access 127.0.0.1 it gives me this error:
I0804 17:01:28.902675 77193 logs.go:41] http: proxy error: dial tcp [::1]:8080: connect: connection refused
Would really appreciate any help. Thank you!
Kubernetes includes a web dashboard that can be used for basic management operations.
Once Dashboard is installed on your Kubernetes cluster, it can be accessed in a few different ways.
I prefer to use the kubectl proxy from the command line to access Kubernetes Dashboard.
kubectl handles authentication with the API server and forwards traffic between your cluster (with the Dashboard deployed inside) and your web browser.
Please note that this works for a locally running web browser, as the proxy listens on localhost.
From the command line:
kubectl proxy
Next, start browsing this address:
http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
In case the Kubernetes API server is exposed and accessible, you may try:
https://<master-ip>:<apiserver-port>/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
where master-ip is the IP address of the Kubernetes master node where the API is running.
On a single-node setup, another way is to use a NodePort configuration to access the Dashboard.
I found it in the dashboard wiki:
Here is a sample of configuration to consider and adapt to your needs:
apiVersion: v1
...
  name: kubernetes-dashboard
  namespace: kube-system
  resourceVersion: "343478"
  selfLink: /api/v1/namespaces/kube-system/services/kubernetes-dashboard-head
spec:
  clusterIP: <your-cluster-ip>
  externalTrafficPolicy: Cluster
  ports:
  - port: 443
    protocol: TCP
    targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
  sessionAffinity: None
  type: NodePort
After applying the configuration, check the exposed https port using the command:
kubectl -n kube-system get service kubernetes-dashboard
If it returns, for example, 31707, you can start your browser with:
https://<master-ip>:31707
I was inspired by the web UI dashboard guide and the accessing dashboard wiki.
I've created a test k8s cluster using kubespray (3 nodes, VirtualBox CentOS VM based) and have been trying to follow the guide for setting up nginx ingress, but I never seem to get an external address assigned to my service.
I can see that the ingress controller is apparently installed:
[root@k8s-01 ~]# kubectl get pods --all-namespaces -l app=ingress-nginx
NAMESPACE NAME READY STATUS RESTARTS AGE
ingress-nginx nginx-ingress-controller-58c9df5856-v6hml 1/1 Running 0 28m
And following the prerequisites docs, I have set up the http-svc sample service:
[root@k8s-01 ~]# kubectl get po
NAME READY STATUS RESTARTS AGE
http-svc-794dc89f5-f2vlx 1/1 Running 0 27m
[root@k8s-01 ~]# kubectl get svc http-svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
http-svc LoadBalancer 10.233.25.131 <pending> 80:30055/TCP 27m
[root@k8s-01 ~]# kubectl describe svc http-svc
Name: http-svc
Namespace: default
Labels: app=http-svc
Annotations: <none>
Selector: app=http-svc
Type: LoadBalancer
IP: 10.233.25.131
Port: http 80/TCP
TargetPort: 8080/TCP
NodePort: http 30055/TCP
Endpoints: 10.233.65.5:8080
Session Affinity: None
External Traffic Policy: Cluster
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Type 27m service-controller ClusterIP -> LoadBalancer
As far as I know, I should see a LoadBalancer Ingress entry, but the External IP for the service still appears to be pending. Something isn't working, and I'm at a loss as to where to diagnose what has gone wrong.
Since you are creating your cluster locally, exposing your service as type LoadBalancer will not provision a load balancer for you. Use type LoadBalancer if you are creating your cluster in a cloud environment such as AWS or GKE. In AWS it will auto-provision a load balancer (ELB) and assign an external IP for the service.
To make your service work with the current settings and environment, change your service type from LoadBalancer to NodePort.
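For example, you could patch the existing service in place (a sketch, using the http-svc service from the question):
kubectl patch svc http-svc -p '{"spec": {"type": "NodePort"}}'
kubectl get svc http-svc    # note the nodePort, then browse http://<node-ip>:<nodePort>/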