I am trying to set up Consul on Kubernetes via the Helm chart, https://www.consul.io/docs/k8s/helm
Based on my pre-Kubernetes knowledge: services access Consul via a Consul Agent that runs on each host and listens on the host's IP.
Now I have deployed the Helm chart to a Kubernetes cluster. First, a terminology question: what is the difference between Consul Agent and Client in this setup? I presume they are the same.
Now, the setup:
Helm chart config (Terraform fragment), with nothing specific to Clients/Agents and their service:
global:
  name: "consul"
  datacenter: "${var.consul_config.datacenter}"
server:
  storage: "${var.consul_config.storage}"
  connect: false
syncCatalog:
  enabled: true
  default: true
  k8sAllowNamespaces: ['*']
  k8sDenyNamespaces: [${join(",", var.consul_config.k8sDenyNamespaces)}]
The client/agent pods run as a DaemonSet, not in host network mode.
kubectl get pods
NAME READY STATUS RESTARTS AGE
consul-8l587 1/1 Running 0 11h
consul-cfd8z 1/1 Running 0 11h
consul-server-0 1/1 Running 0 11h
consul-server-1 1/1 Running 0 11h
consul-server-2 1/1 Running 0 11h
consul-sync-catalog-8b688ff9b-klqrv 1/1 Running 0 11h
consul-vrmtp 1/1 Running 0 11h
Services
kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
consul ExternalName <none> consul.service.consul <none> 11h
consul-dns ClusterIP 172.20.124.238 <none> 53/TCP,53/UDP 11h
consul-server ClusterIP None <none> 8500/TCP,8301/TCP,8301/UDP,8302/TCP,8302/UDP,8300/TCP,8600/TCP,8600/UDP 11h
consul-ui ClusterIP 172.20.131.29 <none> 80/TCP 11h
Question 1: Where is the service that targets the Client (Agent) pods, but not the Server pods? Did I miss it in the Helm chart?
My plan, since I am not going to use host (Kubernetes node) networking, is:
Find the Client/Agent service or create my own (see the sketch below), so that it can be used by Consul's consumers. For example, I would specify this service address for the consul-template init container in the application that consumes the config.
kubectl get pods --selector app=consul,component=client,release=consul
consul-8l587 1/1 Running 0 11h
consul-cfd8z 1/1 Running 0 11h
consul-vrmtp 1/1 Running 0 11h
Optional: add topologyKeys to the agent service, so each consumer does not cross the host boundary.
Question 2: Is this the right approach, or is it done differently for Consul Kubernetes deployments?
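For illustration, a minimal sketch of the kind of Service I have in mind (hypothetical, not from the chart; it reuses the client pod labels from the selector above, assumes the client HTTP API on port 8500, and uses topologyKeys from the alpha ServiceTopology feature, which is removed in newer Kubernetes versions):
apiVersion: v1
kind: Service
metadata:
  name: consul-client
spec:
  clusterIP: None            # headless: resolves directly to the client pod IPs
  selector:
    app: consul
    component: client
    release: consul
  ports:
    - name: http
      port: 8500
      targetPort: 8500
  # optional: prefer the agent on the same node as the consumer
  topologyKeys:
    - "kubernetes.io/hostname"
    - "*"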
You can use the Kubernetes downward API to inject the host IP as an environment variable into your pod.
apiVersion: v1
kind: Pod
metadata:
  name: consul-example
spec:
  containers:
    - name: example
      image: 'consul:latest'
      env:
        - name: HOST_IP
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP
      command:
        - '/bin/sh'
        - '-ec'
        - |
          export CONSUL_HTTP_ADDR="${HOST_IP}:8500"
          consul kv put hello world
  restartPolicy: Never
See https://www.consul.io/docs/k8s/installation/install#accessing-the-consul-http-api for more info.
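To try it out, a sketch of the usual steps (assuming the manifest above is saved as consul-example.yaml and the Consul clients expose the HTTP API on the host at port 8500):
# apply the example pod and check that the kv write succeeded
kubectl apply -f consul-example.yaml
kubectl logs consul-example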
Related
To learn Kubernetes I've built myself a bare-metal cluster using 4 Raspberry Pis and set it up using k3s:
# curl -sfL https://get.k3s.io | sh -
Added nodes etc., and everything comes up and I can see all the nodes and almost everything is working as expected.
I wanted to monitor the PIs so I installed the kube-prometheus-stack with helm:
$ kubectl create namespace monitoring
$ helm install prometheus --namespace monitoring prometheus-community/kube-prometheus-stack
And now everything looks fantastic:
$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system helm-install-traefik-crd-s8zw5 0/1 Completed 0 5d21h
kube-system helm-install-traefik-rc9f2 0/1 Completed 1 5d21h
monitoring prometheus-prometheus-node-exporter-j85rw 1/1 Running 10 28h
kube-system metrics-server-86cbb8457f-mvbkl 1/1 Running 12 5d21h
kube-system coredns-7448499f4d-t7sp8 1/1 Running 13 5d21h
monitoring prometheus-prometheus-node-exporter-mmh2q 1/1 Running 9 28h
monitoring prometheus-prometheus-node-exporter-j4k4c 1/1 Running 10 28h
monitoring alertmanager-prometheus-kube-prometheus-alertmanager-0 2/2 Running 10 28h
kube-system svclb-traefik-zkqd6 2/2 Running 6 19h
monitoring prometheus-prometheus-node-exporter-bft5t 1/1 Running 10 28h
kube-system local-path-provisioner-5ff76fc89d-g8tm6 1/1 Running 12 5d21h
kube-system svclb-traefik-jcxd2 2/2 Running 28 5d21h
kube-system svclb-traefik-mpbjm 2/2 Running 22 5d21h
kube-system svclb-traefik-7kxtw 2/2 Running 20 5d21h
monitoring prometheus-grafana-864598fd54-9548l 2/2 Running 10 28h
kube-system traefik-65969d48c7-9lh9m 1/1 Running 3 19h
monitoring prometheus-prometheus-kube-prometheus-prometheus-0 2/2 Running 10 28h
monitoring prometheus-kube-state-metrics-76f66976cb-m8k2h 1/1 Running 6 28h
monitoring prometheus-kube-prometheus-operator-5c758db547-zsv4s 1/1 Running 6 28h
The services are all there:
$ kubectl get services --all-namespaces
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default kubernetes ClusterIP 10.43.0.1 <none> 443/TCP 5d21h
kube-system kube-dns ClusterIP 10.43.0.10 <none> 53/UDP,53/TCP,9153/TCP 5d21h
kube-system metrics-server ClusterIP 10.43.80.65 <none> 443/TCP 5d21h
kube-system prometheus-kube-prometheus-kube-proxy ClusterIP None <none> 10249/TCP 28h
kube-system prometheus-kube-prometheus-kube-scheduler ClusterIP None <none> 10251/TCP 28h
monitoring prometheus-kube-prometheus-operator ClusterIP 10.43.180.73 <none> 443/TCP 28h
kube-system prometheus-kube-prometheus-coredns ClusterIP None <none> 9153/TCP 28h
kube-system prometheus-kube-prometheus-kube-etcd ClusterIP None <none> 2379/TCP 28h
kube-system prometheus-kube-prometheus-kube-controller-manager ClusterIP None <none> 10252/TCP 28h
monitoring prometheus-kube-prometheus-alertmanager ClusterIP 10.43.195.99 <none> 9093/TCP 28h
monitoring prometheus-prometheus-node-exporter ClusterIP 10.43.171.218 <none> 9100/TCP 28h
monitoring prometheus-grafana ClusterIP 10.43.20.165 <none> 80/TCP 28h
monitoring prometheus-kube-prometheus-prometheus ClusterIP 10.43.207.29 <none> 9090/TCP 28h
monitoring prometheus-kube-state-metrics ClusterIP 10.43.229.14 <none> 8080/TCP 28h
kube-system prometheus-kube-prometheus-kubelet ClusterIP None <none> 10250/TCP,10255/TCP,4194/TCP 28h
monitoring alertmanager-operated ClusterIP None <none> 9093/TCP,9094/TCP,9094/UDP 28h
monitoring prometheus-operated ClusterIP None <none> 9090/TCP 28h
kube-system traefik LoadBalancer 10.43.20.17 192.168.76.200,192.168.76.201,192.168.76.202,192.168.76.203 80:31131/TCP,443:31562/TCP 5d21h
Namespaces:
$ kubectl get namespaces
NAME STATUS AGE
kube-system Active 5d21h
default Active 5d21h
kube-public Active 5d21h
kube-node-lease Active 5d21h
monitoring Active 28h
But I couldn't reach the grafana service.
Fair enough, I thought, let's define an Ingress. But it didn't work:
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: grafana-ingress
  namespace: monitoring
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
    - http:
        paths:
          - pathType: Prefix
            path: /
            backend:
              service:
                name: prometheus-grafana
                port:
                  number: 80
I have no idea why it isn't reaching the service and I can't really see where the problem is. Although I understand containers (I first had everything running on Docker Swarm), I don't really know where, if anywhere, this would show up in the logs.
I've spent the past couple of days trying all sorts of things, and I finally found a hint about namespaces, problems calling services across them, and something called "type: ExternalName".
I checked with curl from a pod inside the cluster (roughly as shown below) and the service is delivering the data inside the "monitoring" namespace, but Traefik can't get there, or maybe can't even see it?
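Roughly how I checked it from inside the cluster (a sketch; the service name and namespace come from the listings above):
# throwaway pod that curls the Grafana service and prints the HTTP status code
kubectl run curl-test -n monitoring --rm -i --restart=Never --image=curlimages/curl -- \
  curl -s -o /dev/null -w '%{http_code}\n' http://prometheus-grafana.monitoring.svc.cluster.local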
Having looked at the Traefik documentation I found the following regarding namespaces, but I have no idea where I would even start to configure it:
providers:
  kubernetesCRD:
    namespaces:
I'm assuming that k3s has set this up correctly as an empty array because I can't find anything on their site that tells me what to do with their combination of "klipper-lb" and "traefik".
I finally tried to define another service with an external name:
---
apiVersion: v1
kind: Service
metadata:
  name: grafana-named
  namespace: kube-system
spec:
  type: ExternalName
  externalName: prometheus-grafana.monitoring.svc.cluster.local
  ports:
    - name: service
      protocol: TCP
      port: 80
      targetPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: grafana-ingress
  namespace: kube-system
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
    - http:
        paths:
          - pathType: Prefix
            path: /
            backend:
              service:
                name: grafana-named
                port:
                  number: 80
After 2-3 days I've tried everything I can think of and googled everything under the sun, and I still can't get to Grafana from outside of the cluster nodes.
I am at a loss as to how I can make anything work with k3s. I installed Lens on my main PC and can see almost everything there, but I think the missing metrics information requires an Ingress or something like that too.
What do I have to do to get Traefik to do what I think is basically its job: route incoming requests to the backend services?
I filed a bug report on GitHub and one of the people there (thanks again, brandond) pointed me in the right direction.
The network layer uses flannel for the in-cluster networking. The default backend for that is "vxlan", which is seemingly more complex and uses virtual ethernet adapters.
For my requirements (read: getting the cluster to even work), the solution was to change the backend to "host-gw".
This is done by adding "--flannel-backend=host-gw" to the k3s server command in the k3s.service unit on the controller node.
$ sudo systemctl edit k3s.service
### Editing /etc/systemd/system/k3s.service.d/override.conf
### Anything between here and the comment below will become the new contents of the file
[Service]
ExecStart=
ExecStart=/usr/local/bin/k3s \
    server \
    '--flannel-backend=host-gw'
### Lines below this comment will be discarded
The first "ExecStart=" clears the existing default start command to enable it to be replaced by the 2nd one.
Now everything is working as I expected, and I can finally move forward with learning K8s.
I'll probably reactivate "vxlan" at some point and figure that out too.
I have successfully deployed
prometheus via helm chart kube-prometheus-stack (https://prometheus-community.github.io/helm-charts)
prometheus-adapter via helm chart prometheus-adapter (https://prometheus-community.github.io/helm-charts)
using default configuration with slight customization.
I can access prometheus, grafana and alertmanager, query metrics and see fancy charts.
But prometheus-adapter keeps complaining on startup that it can't access/discover metrics:
I0326 08:16:52.266095 1 adapter.go:98] successfully using in-cluster auth
I0326 08:16:52.330094 1 dynamic_serving_content.go:111] Loaded a new cert/key pair for "serving-cert::/var/run/serving-cert/tls.crt::/var/run/serving-cert/tls.key"
E0326 08:16:52.334710 1 provider.go:227] unable to update list of all metrics: unable to fetch metrics for query "{namespace!=\"\",__name__!~\"^container_.*\"}": bad_response: unknown response code 404
I've tried various prometheus URLs in the prometheus-adapter Deployment command line argument but the problem is more or less the same.
E.g. some of the URLs I've tried are
--prometheus-url=http://prometheus-operated.prom.svc:9090
--prometheus-url=http://prometheus-kube-prometheus-prometheus.prom.svc.cluster.local:9090
There are the following services / pods running:
$ kubectl -n prom get pods
NAME READY STATUS RESTARTS AGE
alertmanager-prometheus-kube-prometheus-alertmanager-0 2/2 Running 0 16h
prometheus-adapter-76fcc79b7b-7xvrm 1/1 Running 0 10m
prometheus-grafana-559b79b564-bh85n 2/2 Running 0 16h
prometheus-kube-prometheus-operator-8556f58759-kl84l 1/1 Running 0 16h
prometheus-kube-state-metrics-6bfcd6f648-ms459 1/1 Running 0 16h
prometheus-prometheus-kube-prometheus-prometheus-0 2/2 Running 1 16h
prometheus-prometheus-node-exporter-2x6mt 1/1 Running 0 16h
prometheus-prometheus-node-exporter-bns9n 1/1 Running 0 16h
prometheus-prometheus-node-exporter-sbcjb 1/1 Running 0 16h
$ kubectl -n prom get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
alertmanager-operated ClusterIP None <none> 9093/TCP,9094/TCP,9094/UDP 16h
prometheus-adapter ClusterIP 10.0.144.45 <none> 443/TCP 16h
prometheus-grafana ClusterIP 10.0.94.160 <none> 80/TCP 16h
prometheus-kube-prometheus-alertmanager ClusterIP 10.0.0.135 <none> 9093/TCP 16h
prometheus-kube-prometheus-operator ClusterIP 10.0.170.205 <none> 443/TCP 16h
prometheus-kube-prometheus-prometheus ClusterIP 10.0.250.223 <none> 9090/TCP 16h
prometheus-kube-state-metrics ClusterIP 10.0.135.215 <none> 8080/TCP 16h
prometheus-operated ClusterIP None <none> 9090/TCP 16h
prometheus-prometheus-node-exporter ClusterIP 10.0.70.247 <none> 9100/TCP 16h
kubectl -n kube-system get deployment/metrics-server
NAME READY UP-TO-DATE AVAILABLE AGE
metrics-server 1/1 1 1 15d
Prometheus-adapter helm chart gets deployed using the following values:
prometheus:
  url: http://prometheus-kube-prometheus-prometheus.prom.svc.cluster.local
certManager:
  enabled: true
What is the correct value of --prometheus-url for prometheus-adapter in my setup?
The problem is related to the additional path prefix used to expose Prometheus via an Ingress.
I was using the additional path prefix /monitoring/prometheus/ in my Ingress configuration.
The solution is to tell prometheus-adapter, too, that Prometheus is accessible including this path prefix.
Thus the following makes prometheus-adapter happy:
--prometheus-url=http://prometheus-kube-prometheus-prometheus.prom.svc.cluster.local:9090/monitoring/prometheus/
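Equivalently, the same thing can be expressed through the chart's values instead of the raw flag (a sketch using the prometheus-adapter chart's prometheus.url / prometheus.port / prometheus.path keys, as also shown in the other answers below):
prometheus:
  url: http://prometheus-kube-prometheus-prometheus.prom.svc.cluster.local
  port: 9090
  path: /monitoring/prometheus/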
And now I can see custom metrics when executing
kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1
Thank you "rock'n rolla" for giving some hints !
I'm using both helm charts (kube-prometheus-stack and prometheus-adapter).
The additional path prefix that works for me is "/", but the Prometheus URL must include the release name you gave to "helm install" for the stack. I'm using "prostack" as the stack name. So finally, this works for me:
helm install <adapter-name> -n <namespace> --set prometheus.url=http://prostack-kube-prometheus-s-prometheus.monitoring.svc.cluster.local --set prometheus.port=9090 --set prometheus.path=/
I struggled with the same issue.
When I installed my Prometheus server with Helm via the community chart, I got a message like this:
The Prometheus server can be accessed via port 80 on the following DNS name from within your cluster:
my-tag-prometheus-server.carbon.svc.cluster.local
Please note that it says the service is accessible on port 80 and not 9090, which I had not noticed.
So in my values.yaml file for the prometheus-adapter chart I specified port 80 (instead of the standard 9090), and it worked:
# Url to access prometheus
prometheus:
  # Value is templated
  url: http://my-tag-prometheus-server.carbon.svc.cluster.local
  port: 80
  path: ""
Learning kubernetes with kubectl and minikube locally. I can see this via kubectl:
> kubectl get all -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/mongodb-deployment-8f6675bc5-tjwsb 1/1 Running 0 20s 10.1.43.14 chris-x1 <none> <none>
pod/mongo-express-78fcf796b8-9gbsd 1/1 Running 0 20s 10.1.43.15 chris-x1 <none> <none>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/kubernetes ClusterIP 10.152.183.1 <none> 443/TCP 29m <none>
service/mongo-express-service LoadBalancer 10.152.183.254 <pending> 8081:30000/TCP 20s app=mongo-express
service/mongodb-service ClusterIP 10.152.183.115 <none> 27017/TCP 20s app=mongodb
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
deployment.apps/mongodb-deployment 1/1 1 1 20s mongodb mongo app=mongodb
deployment.apps/mongo-express 1/1 1 1 21s mongo-express mongo-express app=mongo-express
NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR
replicaset.apps/mongodb-deployment-8f6675bc5 1 1 1 20s mongodb mongo app=mongodb,pod-template-hash=8f6675bc5
replicaset.apps/mongo-express-78fcf796b8 1 1 1 21s mongo-express mongo-express app=mongo-express,pod-template-hash=78fcf796b8
But when I launch minikube dashboard I don't see any pods, deployments, services, etc. It's like they are running on different clusters. If I paste the YAML configs directly into the minikube dashboard, then I can see everything. So strange... Why?
I can use the minikube kubectl command, but that doesn't seem like how this should work.
Running Ubuntu 20, kubectl 1.20, and minikube 1.17.
Turns out my kubectl was pointed at the microk8s cluster instead of minikube.
chris@chris-x1 /v/w/p/k8s> kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://127.0.0.1:16443
  name: microk8s-cluster
contexts:
- context:
    cluster: microk8s-cluster
    user: admin
  name: microk8s
current-context: microk8s
kind: Config
preferences: {}
users:
- name: admin
  user:
    token: REDACTED
So I needed to follow these steps to access a dashboard: https://microk8s.io/docs/addon-dashboard
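Alternatively, if the goal is to use minikube's own cluster and dashboard, switching the kubectl context is enough (a sketch, assuming minikube has created a context named "minikube"):
# list the available contexts and point kubectl at the minikube cluster
kubectl config get-contexts
kubectl config use-context minikube
# open the dashboard for that cluster
minikube dashboard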
I have deployed Kubernetes and the dashboard onto a compute instance in Oracle Cloud.
The dashboard is installed along with Grafana on my compute instance.
NAME READY STATUS RESTARTS AGE
po/etcd-mst-instance1 1/1 Running 0 1h
po/heapster-7856f6b566-rkfx5 1/1 Running 0 1h
po/kube-apiserver-mst-instance1 1/1 Running 0 1h
po/kube-controller-manager-mst-instance1 1/1 Running 0 1h
po/kube-dns-d879d6bcb-b9zjf 3/3 Running 0 1h
po/kube-flannel-ds-lgklw 1/1 Running 0 1h
po/kube-proxy-g6vxm 1/1 Running 0 1h
po/kube-scheduler-mst-instance1 1/1 Running 0 1h
po/kubernetes-dashboard-dd5c889c-6vphq 1/1 Running 0 1h
po/monitoring-grafana-5d4d76cd65-p7n5l 1/1 Running 0 1h
po/monitoring-influxdb-787479f6fd-8qkg2 1/1 Running 0 1h
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
svc/heapster ClusterIP 10.98.200.184 <none> 80/TCP 1h
svc/kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP 1h
svc/kubernetes-dashboard ClusterIP 10.107.155.3 <none> 443/TCP 1h
svc/monitoring-grafana ClusterIP 10.96.130.226 <none> 80/TCP 1h
svc/monitoring-influxdb ClusterIP 10.105.163.213 <none> 8086/TCP 1h
I am trying to access the dashboard via SSH and ran the following on my local computer:
ssh -L localhost:8001:172.31.4.117:6443 opc@xxxxxxxx
However, it gives me this error:
Permission denied (publickey,gssapi-keyex,gssapi-with-mic).
I'm not sure what the best way to access the dashboard is. I am new to k8s and still at a beginner stage, so I would like some advice. I have also tried running kubectl proxy on my local computer, but when I try to access 127.0.0.1 it gives me this error:
I0804 17:01:28.902675 77193 logs.go:41] http: proxy error: dial tcp [::1]:8080: connect: connection refused
Would really appreciate any help, thank you.
Kubernetes includes a web dashboard that can be used for basic management operations.
Once Dashboard is installed on your Kubernetes cluster, it can be accessed in a few different ways.
I prefer to use kubectl proxy from the command line to access the Kubernetes Dashboard.
kubectl handles authentication with the API server for you and forwards traffic between your cluster (with the Dashboard deployed inside) and your web browser.
Note that this only works for a web browser running locally, since the proxy listens on localhost.
From the command line:
kubectl proxy
Next, start browsing this address:
http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
In case the Kubernetes API server is exposed and accessible from outside, you may try:
https://<master-ip>:<apiserver-port>/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
where <master-ip> is the IP address of your Kubernetes master node where the API server is running.
On a single-node setup, another way is to use a NodePort configuration to access the Dashboard.
I found this on the Dashboard wiki:
Here is a sample of configuration to consider and adapt to your needs:
apiVersion: v1
...
  name: kubernetes-dashboard
  namespace: kube-system
  resourceVersion: "343478"
  selfLink: /api/v1/namespaces/kube-system/services/kubernetes-dashboard-head
spec:
  clusterIP: <your-cluster-ip>
  externalTrafficPolicy: Cluster
  ports:
  - port: 443
    protocol: TCP
    targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
  sessionAffinity: None
  type: NodePort
After applying the configuration, check the exposed port for HTTPS using the command:
kubectl -n kube-system get service kubernetes-dashboard
If it returned, for example, 31707, you could point your browser at:
https://<master-ip>:31707
I was inspired by the web UI dashboard guide and the accessing-dashboard wiki.
I've been trying to use Traefik as an Ingress Controller on Google Cloud's container engine.
I got my http deployment/service up and running (when I exposed it with a normal LoadBalancer, it was answering fine).
I then removed the LoadBalancer, and followed this tutorial: https://docs.traefik.io/user-guide/kubernetes/
So I got a new traefik-ingress-controller deployment and service, and an ingress for traefik's ui which I can access through the kubectl proxy.
I then created my ingress for my http service, but here comes my issue: I can't find a way to expose it externally.
I want it to be accessible by anybody via an external IP.
What am I missing?
Here is the output of kubectl get --export all:
NAME READY STATUS RESTARTS AGE
po/mywebservice-3818647231-gr3z9 1/1 Running 0 23h
po/mywebservice-3818647231-rn4fw 1/1 Running 0 1h
po/traefik-ingress-controller-957212644-28dx6 1/1 Running 0 1h
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
svc/mywebservice 10.51.254.147 <none> 80/TCP 1d
svc/kubernetes 10.51.240.1 <none> 443/TCP 1d
svc/traefik-ingress-controller 10.51.248.165 <nodes> 80:31447/TCP,8080:32481/TCP 25m
svc/traefik-web-ui 10.51.248.65 <none> 80/TCP 3h
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deploy/mywebservice 2 2 2 2 1d
deploy/traefik-ingress-controller 1 1 1 1 3h
NAME DESIRED CURRENT READY AGE
rs/mywebservice-3818647231 2 2 2 23h
rs/traefik-ingress-controller-957212644 1 1 1 3h
You need to expose the Traefik service. Set the service spec type to LoadBalancer. Try the service file below, which I've used previously:
apiVersion: v1
kind: Service
metadata:
  name: traefik
spec:
  type: LoadBalancer
  selector:
    app: traefik
    tier: proxy
  ports:
  - port: 80
    targetPort: 80
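After applying it, the cloud provider should provision an external IP for the service; a quick way to check (assuming the service is named traefik as above):
# wait for EXTERNAL-IP to change from <pending> to a public address
kubectl get service traefik --watch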