Use Prometheus with an external IP address - Kubernetes

We have a k8s cluster with an application running on it.
Now I'm trying to add Prometheus (https://prometheus.io/),
and I use the command
helm install stable/prometheus --version 6.7.4 --name my-prometheus
The command works and I get this output:
NAME: my-prometheus
LAST DEPLOYED: Tue Feb 5 15:21:46 2019
NAMESPACE: default
STATUS: DEPLOYED
...
When I run the command
kubectl get services
I get this:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 100.64.0.1 <none> 443/TCP 2d4h
my-prometheus-alertmanager ClusterIP 100.75.244.55 <none> 80/TCP 8m44s
my-prometheus-kube-state-metrics ClusterIP None <none> 80/TCP 8m43s
my-prometheus-node-exporter ClusterIP None <none> 9100/TCP 8m43s
my-prometheus-pushgateway ClusterIP 100.75.24.67 <none> 9091/TCP 8m43s
my-prometheus-server ClusterIP 100.33.26.206 <none> 80/TCP 8m43s
I didn't get any external IP.
Does anyone know how to add one? Via a service? Any example for this?
Update:
I've added the following YAML:
apiVersion: v1
kind: Service
metadata:
  name: prometheus-service
spec:
  selector:
    app: prometheus-server
  type: LoadBalancer
  ports:
    - port: 8080
      targetPort: 9090
      nodePort: 30001
which was created successfully.
Now I see the external IP when running kubectl get services:
my-prometheus-server LoadBalancer 100.33.26.206 8080:30001/TCP 80/TCP 8m43s
And when I use 100.33.26.206:30001 in the browser, nothing happens. Any idea?

I think what you are trying to do is to create a service of type LoadBalancer; those have both an internal and an external IP.
You can create one like any other service, but you should specify these two fields:
externalTrafficPolicy: Local
type: LoadBalancer
Updated:
There seems to be some confusion: you don't need an external IP to monitor your apps, it will only be used to access the Prometheus UI.
The UI is accessible on port 9090, but Prometheus is never accessed by the exporters, as it is Prometheus that scrapes the exporters.
Now, to access a service from the internet you should have a public IP, but it seems that what you have is still an internal IP: it's in the same subnet as the other ClusterIPs, and it should not be. Also, in place of an external IP it's showing a port redirect, which is also wrong, as the Prometheus UI is on port 9090 (if you didn't modify your configuration, it should still be). You should try to remove the nodePort and leave the port mapping to Kubernetes.
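Putting that advice together, a minimal sketch of such a service (the service name and selector labels are assumptions; check your actual pod labels with kubectl get pods --show-labels):
apiVersion: v1
kind: Service
metadata:
  name: prometheus-external   # hypothetical name
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  selector:
    app: prometheus           # assumed labels for the prometheus-server pods
    component: server
  ports:
    - port: 80                # port exposed on the external IP
      targetPort: 9090        # Prometheus UI port inside the pod
      # no nodePort set; Kubernetes will allocate one automatically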

The Prometheus Helm chart does support configuration of the service; see the documentation.
To configure the Prometheus server on a local cluster, follow these steps:
Create values.yaml:
server:
  service:
    servicePort: 31000
    type: LoadBalancer
    loadBalancerIP: localhost
or
server:
  service:
    nodePort: 31000
    type: NodePort
Add stable repo to helm (if missing):
helm repo add stable "https://kubernetes-charts.storage.googleapis.com/"
Install Prometheus:
helm install prometheus-demo stable/prometheus --values .\values.yaml
Wait 1-2 minutes. Prometheus should then be available at: http://localhost:31000/
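To verify, you can list the services afterwards; the name prometheus-demo-server is an assumption based on the chart's usual <release>-server naming:
kubectl get svc prometheus-demo-server
Look for type LoadBalancer (or NodePort) and port 31000 in the output.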

Related

Kubernetes service responding on different port than assigned port

I've deployed a few services and found one service behaving differently from the others. I configured it to listen on port 8090 (which maps to 8443 internally), but requests only work if I send them on port 8080. Here's my YAML file for the service (stripped down to essentials); there is a deployment which encapsulates the service and container.
apiVersion: v1
kind: Service
metadata:
  name: uisvc
  namespace: default
  labels:
    helm.sh/chart: foo-1
    app.kubernetes.io/name: foo
    app.kubernetes.io/instance: rb-foo
spec:
  clusterIP: None
  ports:
    - name: http
      port: 8090
      targetPort: 8080
  selector:
    app.kubernetes.io/component: uisvc
After installing the Helm chart, when I run kubectl get svc, I get the following output:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
fooaccess ClusterIP None <none> 8888/TCP 119m
fooset ClusterIP None <none> 8080/TCP 119m
foobus ClusterIP None <none> 6379/TCP 119m
uisvc ClusterIP None <none> 8090/TCP 119m
However, when I ssh into one of the other running containers and issue a curl request on port 8090, I get "Connection refused". If I curl http://uisvc:8080, I get the right response. The container is running a Spring Boot application which listens on 8080 by default. The only explanation I could come up with is that somehow the port/targetPort is being ignored in this config and other pods are reaching the Spring service directly.
Is this behaviour correct? Why is it not listening on 8090? How should I make it work this way?
Edit: Output for kubectl describe svc uisvc
Name: uisvc
Namespace: default
Labels: app.kubernetes.io/instance=foo-rba
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=rba
helm.sh/chart=rba-1
Annotations: meta.helm.sh/release-name: foo
meta.helm.sh/release-namespace: default
Selector: app.kubernetes.io/component=uisvc
Type: ClusterIP
IP: None
Port: http 8090/TCP
TargetPort: 8080/TCP
Endpoints: 172.17.0.8:8080
Session Affinity: None
Events: <none>
This is expected behavior, since you used a headless service.
Headless services are used as a service discovery mechanism: instead of returning a single DNS A record, the DNS server returns multiple A records for your service, each pointing to the IP of an individual pod backing the service. So you do a simple DNS A record lookup and get the IPs of all the pods that are part of the service.
Since a headless service doesn't create iptables rules but creates DNS records instead, you interact directly with your pod instead of going through a proxy. So if you resolve <servicename>:<port> you will get <podN_IP>:<port>, and your connection goes to the pod directly. As long as all of this is in the same namespace, you don't have to resolve it by its full DNS name.
With several pods, DNS will give you all of them in random order (or in round-robin order). The order depends on the DNS server implementation and settings.
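If you actually want the 8090-to-8080 mapping to apply, one option (a sketch of the idea, reusing the fields from your manifest) is to drop clusterIP: None, so the service gets a regular ClusterIP and kube-proxy performs the port translation:
apiVersion: v1
kind: Service
metadata:
  name: uisvc
  namespace: default
spec:
  # no "clusterIP: None": a regular ClusterIP is allocated,
  # so kube-proxy maps service port 8090 to targetPort 8080
  ports:
    - name: http
      port: 8090
      targetPort: 8080
  selector:
    app.kubernetes.io/component: uisvc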
For more reading, please visit:
Services-networking/headless-services
This Stack Overflow question, with a great answer explaining how headless services work

Kubernetes - Curl a Cluster-IP Service

I'm following this kubernetes tutorial to create a service https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/#creating-a-service
I'm using minikube in my local environment. Everything works fine but I cannot curl my cluster IP. I get an operation timeout:
curl: (7) Failed to connect to 10.105.7.117 port 80: Operation timed out
My kubectl get svc output:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 5d17h
my-nginx ClusterIP 10.105.7.117 <none> 80/TCP 42h
It seems that I'm having the same issue as this guy here, who did not find an answer to his problem: https://github.com/kubernetes/kubernetes/issues/86471
I have tried to do the same in my gcloud console, but I get the same result. I can only curl my external IP service.
If I understood correctly, I'm supposed to already be inside my minikube local cluster when I start minikube, so I should be able to curl the service as mentioned in the tutorial.
What am I doing wrong?
Although each Pod has a unique IP address, those IPs are not exposed outside the cluster without a Service. Services allow your applications to receive traffic. Services can be exposed in different ways by specifying a type in the ServiceSpec:
ClusterIP (default) - Exposes the Service on an internal IP in the cluster. This type makes the Service only reachable from within the cluster. That is why you cannot access your service via ClusterIP from outside the cluster.
NodePort - Exposes the Service on the same port of each selected Node in the cluster using NAT. Makes a Service accessible from outside the cluster using <NodeIP>:<NodePort>. Superset of ClusterIP.
kind: Service
apiVersion: v1
metadata:
  name: example
  namespace: example
spec:
  type: NodePort
  selector:
    app: example
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
      name: ui
Then execute this command:
$ kubectl get svc --namespace=example
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
jenkins-ui NodePort yy.zz.xx.xx <none> 8080:30960/TCP 1d
Get the minikube IP to use as the node IP:
$ minikube ip
aa.bb.cc.dd
then you can curl it:
curl http://aa.bb.cc.dd:8080
LoadBalancer - Creates an external load balancer in the current cloud (if supported) and assigns a fixed, external IP to the Service. Superset of NodePort.
kind: Service
apiVersion: v1
metadata:
  name: example
spec:
  selector:
    app: example
  ports:
    - protocol: "TCP"
      port: 8080
      targetPort: 8080
  type: LoadBalancer
  externalIPs:
    - <your minikube ip>
then you can curl it:
$ curl http://yourminikubeip:8080/
ExternalName - Exposes the Service using an arbitrary name (specified by externalName in the spec) by returning a CNAME record with the name. No proxy is used. This type requires v1.7 or higher of kube-dns. The service itself is only exposed within the cluster; however, the FQDN external-name is not handled or controlled by the cluster. This is likely a publicly accessible URL, so you can curl it from anywhere. You'll have to configure your domain in a way that restricts who can access it.
The service type ExternalName is external to the cluster and really only allows for a CNAME redirect from within your cluster to an external path.
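A minimal sketch of an ExternalName service (the name and target domain here are hypothetical):
kind: Service
apiVersion: v1
metadata:
  name: example-external
spec:
  type: ExternalName
  # DNS lookups for example-external inside the cluster return a CNAME
  # pointing at this external domain
  externalName: my.database.example.com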
See more: exposing-services-kubernetes.
ClusterIP is only available inside the kubernetes network.
If you want to be able to hit this from outside of the cluster, use a LoadBalancer to expose a public IP that you can then access from outside of the cluster.
Or..
kubectl port-forward <pod_name> 8080:80
then curl
curl http://localhost:8080
which will route through the port-forward to port 80 of the pod.
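kubectl port-forward can also target the service directly, which saves looking up a pod name (my-nginx is the service from the listing above):
kubectl port-forward svc/my-nginx 8080:80
This forwards localhost:8080 to port 80 of a pod backing the my-nginx service.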

Can't get traefik on kubernetes to work with an external IP in a home lab

I have a 3-node Kubernetes cluster running at home. I deployed traefik with helm; however, it never gets an external IP. Since this is in the private IP address space, shouldn't I expect the external IP to be something in the same address space? Am I missing something critical here?
$ kubectl describe svc traefik --namespace kube-system
Name: traefik
Namespace: kube-system
Labels: app=traefik
chart=traefik-1.64.0
heritage=Tiller
release=traefik
Annotations: <none>
Selector: app=traefik,release=traefik
Type: NodePort
IP: 10.233.62.160
Port: http 80/TCP
TargetPort: http/TCP
NodePort: http 31111/TCP
Endpoints: 10.233.86.47:80
Port: https 443/TCP
TargetPort: httpn/TCP
NodePort: https 30690/TCP
Endpoints: 10.233.86.47:8880
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
$ kubectl get svc traefik --namespace kube-system -w
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
traefik NodePort 10.233.62.160 <none> 80:31111/TCP,443:30690/TCP 133m
Use MetalLB to get an LB IP. More here on their site.
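For a layer-2 setup in a home lab, a sketch of the ConfigMap-style configuration used by MetalLB 0.x (the address range is an assumption; it must be a free range on your LAN):
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250   # assumed free range on the home LAN
Note that MetalLB only assigns addresses to services of type LoadBalancer, so the traefik service would also need to be switched from NodePort.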
External IPs are assigned on cloud provider platforms like Google Cloud Platform.
In your case, you can access the traefik service at the URL below:
node-host:nodeport
http://<hostname-of-worker-node>:31111
As seen in the output, the type of your service is NodePort. With this type, no external IP is exposed. Here is the definition from the official documentation:
If you set the type field to NodePort, the Kubernetes master will allocate a port from a range specified by --service-node-port-range flag (default: 30000-32767), and each Node will proxy that port (the same port number on every Node) into your Service. That port will be reported in your Service’s .spec.ports[*].nodePort field.
If you want to reach your service from outside, you have to use the IP address of your node and the port that Kubernetes exposed, like this:
http://IP_OF_YOUR_COMPUTER:31111
You can read this page for details.
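To find a usable node IP, for example:
kubectl get nodes -o wide
Take the INTERNAL-IP of any node (on a home lab, its LAN address) and curl http://<node-ip>:31111.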

Configuring Istio, Kubernetes and MetalLB to use a Istio LoadBalancer

I’m struggling with the last step of a configuration using MetalLB, Kubernetes, Istio on a bare-metal instance, and that is to have a web page returned from a service to the outside world via an Istio VirtualService route. I’ve just updated the instance to
MetalLB (version 0.7.3)
Kubernetes (version 1.12.2)
Istio (version 1.0.3)
I’ll start with what does work.
All complementary services have been deployed and most are working:
Kubernetes Dashboard on http://localhost:8001
Prometheus Dashboard on http://localhost:10010 (I had something else on 9009)
Envoy Admin on http://localhost:15000
Grafana (Istio Dashboard) on http://localhost:3000
Jaeger on http://localhost:16686
I say most because since the upgrade to Istio 1.0.3 I've lost the telemetry from istio-ingressgateway in the Jaeger dashboard and I'm not sure how to bring it back. I've dropped the pod and re-created it, to no avail.
Outside of that, MetalLB and K8S appear to be working fine and the load-balancer is configured correctly (using ARP).
kubectl get svc -n istio-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
grafana ClusterIP 10.109.247.149 <none> 3000/TCP 9d
istio-citadel ClusterIP 10.110.129.92 <none> 8060/TCP,9093/TCP 28d
istio-egressgateway ClusterIP 10.99.39.29 <none> 80/TCP,443/TCP 28d
istio-galley ClusterIP 10.98.219.217 <none> 443/TCP,9093/TCP 28d
istio-ingressgateway LoadBalancer 10.108.175.231 192.168.1.191 80:31380/TCP,443:31390/TCP,31400:31400/TCP,15011:30805/TCP,8060:32514/TCP,853:30601/TCP,15030:31159/TCP,15031:31838/TCP 28d
istio-pilot ClusterIP 10.97.248.195 <none> 15010/TCP,15011/TCP,8080/TCP,9093/TCP 28d
istio-policy ClusterIP 10.98.133.209 <none> 9091/TCP,15004/TCP,9093/TCP 28d
istio-sidecar-injector ClusterIP 10.102.158.147 <none> 443/TCP 28d
istio-telemetry ClusterIP 10.103.141.244 <none> 9091/TCP,15004/TCP,9093/TCP,42422/TCP 28d
jaeger-agent ClusterIP None <none> 5775/UDP,6831/UDP,6832/UDP,5778/TCP 27h
jaeger-collector ClusterIP 10.104.66.65 <none> 14267/TCP,14268/TCP,9411/TCP 27h
jaeger-query LoadBalancer 10.97.70.76 192.168.1.193 80:30516/TCP 27h
prometheus ClusterIP 10.105.176.245 <none> 9090/TCP 28d
zipkin ClusterIP None <none> 9411/TCP 27h
I can expose my deployment using:
kubectl expose deployment enrich-dev --type=LoadBalancer --name=enrich-expose
It all works perfectly fine and I can hit the webpage from the external load-balanced IP address (I deleted the exposed service after this).
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
enrich-expose LoadBalancer 10.108.43.157 192.168.1.192 31380:30170/TCP 73s
enrich-service ClusterIP 10.98.163.217 <none> 80/TCP 57m
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 36d
If I create a K8S Service in the default namespace (I've tried multiple)
apiVersion: v1
kind: Service
metadata:
  name: enrich-service
  labels:
    run: enrich-service
spec:
  ports:
    - name: http
      port: 80
      protocol: TCP
  selector:
    app: enrich
followed by a gateway and a route (VirtualService), the only response I get is a 404 from outside the mesh. You'll see in the gateways field I'm using the reserved word mesh, but I've tried both that and naming the specific gateway. I've also tried different match prefixes for the specific URI and the port you can see below.
Gateway
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: enrich-dev-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - "*"
VirtualService
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: enrich-virtualservice
spec:
  hosts:
    - "enrich-service.default"
  gateways:
    - mesh
  http:
    - match:
        - port: 80
      route:
        - destination:
            host: enrich-service.default
            subset: v1
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: enrich-destination
spec:
  host: enrich-service.default
  trafficPolicy:
    loadBalancer:
      simple: LEAST_CONN
  subsets:
    - name: v1
      labels:
        app: enrich
I've double-checked it's not DNS playing up, because I can go into the shell of the ingress-gateway either via busybox or using the K8S dashboard
http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/shell/istio-system/istio-ingressgateway-6bbdd58f8c-glzvx/?namespace=istio-system
and do both an
nslookup enrich-service.default
and
curl -f http://enrich-service.default/
and both work successfully, so I know the ingress-gateway pod can see those. The sidecars are set for auto-injection in both the default namespace and the istio-system namespace.
The logs for the ingress-gateway show the 404:
[2018-11-01T03:07:54.351Z] "GET /metadata HTTP/1.1" 404 - 0 0 1 - "192.168.1.90" "curl/7.58.0" "6c1796be-0791-4a07-ac0a-5fb07bc3818c" "enrich-service.default" "-" - - 192.168.224.168:80 192.168.1.90:43500
[2018-11-01T03:26:39.339Z] "GET / HTTP/1.1" 404 - 0 0 1 - "192.168.1.90" "curl/7.58.0" "ed956af4-77b0-46e6-bd26-c153e29837d7" "enrich-service.default" "-" - - 192.168.224.168:80 192.168.1.90:53960
192.168.224.168:80 is the IP address of the gateway.
192.168.1.90:53960 is the IP address of my external client.
Any suggestions? I've tried hitting this from multiple angles for a couple of days now and I feel I'm just missing something simple. Suggested logs to look at, perhaps?
Just to close this question out: the solution to the problem in my instance was the following. The mistake in configuration started all the way back at the Kubernetes cluster initialisation. I had applied:
kubeadm init --pod-network-cidr=n.n.n.n/n --apiserver-advertise-address 0.0.0.0
with the pod-network-cidr using the same address range as the local LAN on which the Kubernetes installation was deployed, i.e. the desktop for the Ubuntu host used the same IP subnet as the one I'd assigned to the container network.
For the most part everything operated fine, as detailed above, until the Istio proxy tried to route packets from an external load-balancer IP address to an internal IP address which happened to be on the same subnet. Project Calico with Kubernetes seemed able to cope with it, as that's effectively Layer 3/4 policy, but Istio had a problem with it at L7 (even though it was sitting on Calico underneath).
The solution was to tear down my entire Kubernetes deployment. I was paranoid and went so far as to uninstall Kubernetes and redeploy with a pod network in the 172 range, which had nothing to do with my local LAN. I also made the same changes in the Project Calico configuration file to match the pod network. After that change, everything worked as expected.
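As a sketch, with a hypothetical non-overlapping pod CIDR in the 172 range (any RFC 1918 block that doesn't collide with the LAN would do), the corrected initialisation would look like:
# 172.16.0.0/16 chosen so it does not overlap the 192.168.x.x home LAN
kubeadm init --pod-network-cidr=172.16.0.0/16 --apiserver-advertise-address 0.0.0.0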
I suspect that a more public configuration, where your cluster is directly attached to a BGP router as opposed to using MetalLB in an L2 configuration on a subset of your LAN, wouldn't exhibit this issue either. I've documented it more in this post:
Microservices: .Net, Linux, Kubernetes and Istio make a powerful combination

Access to service IP from the pod

I have a pod with MySQL and a service to provide access from outside, so I can connect to my database at 192.168.1.29:3306 from another machine.
But how can I connect from another pod in the same cluster (same node)?
This is my service description:
Name: etl-mysql
Namespace: default
Labels: run=etl-mysql
Annotations: field.cattle.io/publicEndpoints=[{"addresses":["192.168.1.20"],"port":31211,"protocol":"TCP","serviceName":"default:etl-mysql","allNodes":true}]
kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"run":"etl-mysql"},"name":"etl-mysql","namespace":"default"},"spec":{"extern...
Selector: run=etl-mysql
Type: NodePort
IP: 10.43.44.58
External IPs: 192.168.1.29
Port: etl-mysql-port 3306/TCP
TargetPort: 3306/TCP
NodePort: etl-mysql-port 31211/TCP
Endpoints: 10.42.1.87:3306
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
Kubernetes has a built-in DNS that registers services automatically, resulting in simple-to-use DNS addresses like this: http://{servicename}.{namespace}:{servicePort}
If you are in the same namespace you can omit the namespace part, and if your service listens on port 80 that part can be omitted as well.
If you need further information, the following documentation will help you: DNS for Services and Pods
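For the service above, that means from any other pod (a sketch; the user and password are placeholders):
# short name, works from the same (default) namespace
mysql -h etl-mysql -P 3306 -u <user> -p
# fully qualified name, works from any namespace (assuming the default cluster.local domain)
mysql -h etl-mysql.default.svc.cluster.local -P 3306 -u <user> -p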