To learn Kubernetes I've built myself a bare-metal cluster using 4 Raspberry Pis and set it up using k3s:
# curl -sfL https://get.k3s.io | sh -
I added the nodes etc., everything comes up, I can see all the nodes, and almost everything is working as expected.
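For reference, joining the worker Pis is the standard k3s agent install; roughly like this (the controller address and token are placeholders — the token lives in /var/lib/rancher/k3s/server/node-token on the controller):
# curl -sfL https://get.k3s.io | K3S_URL=https://<controller-ip>:6443 K3S_TOKEN=<node-token> sh -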
I wanted to monitor the Pis, so I installed kube-prometheus-stack with Helm:
$ kubectl create namespace monitoring
$ helm install prometheus --namespace monitoring prometheus-community/kube-prometheus-stack
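This assumes the prometheus-community repo had already been added, e.g.:
$ helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
$ helm repo update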
And now everything looks fantastic:
$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system helm-install-traefik-crd-s8zw5 0/1 Completed 0 5d21h
kube-system helm-install-traefik-rc9f2 0/1 Completed 1 5d21h
monitoring prometheus-prometheus-node-exporter-j85rw 1/1 Running 10 28h
kube-system metrics-server-86cbb8457f-mvbkl 1/1 Running 12 5d21h
kube-system coredns-7448499f4d-t7sp8 1/1 Running 13 5d21h
monitoring prometheus-prometheus-node-exporter-mmh2q 1/1 Running 9 28h
monitoring prometheus-prometheus-node-exporter-j4k4c 1/1 Running 10 28h
monitoring alertmanager-prometheus-kube-prometheus-alertmanager-0 2/2 Running 10 28h
kube-system svclb-traefik-zkqd6 2/2 Running 6 19h
monitoring prometheus-prometheus-node-exporter-bft5t 1/1 Running 10 28h
kube-system local-path-provisioner-5ff76fc89d-g8tm6 1/1 Running 12 5d21h
kube-system svclb-traefik-jcxd2 2/2 Running 28 5d21h
kube-system svclb-traefik-mpbjm 2/2 Running 22 5d21h
kube-system svclb-traefik-7kxtw 2/2 Running 20 5d21h
monitoring prometheus-grafana-864598fd54-9548l 2/2 Running 10 28h
kube-system traefik-65969d48c7-9lh9m 1/1 Running 3 19h
monitoring prometheus-prometheus-kube-prometheus-prometheus-0 2/2 Running 10 28h
monitoring prometheus-kube-state-metrics-76f66976cb-m8k2h 1/1 Running 6 28h
monitoring prometheus-kube-prometheus-operator-5c758db547-zsv4s 1/1 Running 6 28h
The services are all there:
$ kubectl get services --all-namespaces
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default kubernetes ClusterIP 10.43.0.1 <none> 443/TCP 5d21h
kube-system kube-dns ClusterIP 10.43.0.10 <none> 53/UDP,53/TCP,9153/TCP 5d21h
kube-system metrics-server ClusterIP 10.43.80.65 <none> 443/TCP 5d21h
kube-system prometheus-kube-prometheus-kube-proxy ClusterIP None <none> 10249/TCP 28h
kube-system prometheus-kube-prometheus-kube-scheduler ClusterIP None <none> 10251/TCP 28h
monitoring prometheus-kube-prometheus-operator ClusterIP 10.43.180.73 <none> 443/TCP 28h
kube-system prometheus-kube-prometheus-coredns ClusterIP None <none> 9153/TCP 28h
kube-system prometheus-kube-prometheus-kube-etcd ClusterIP None <none> 2379/TCP 28h
kube-system prometheus-kube-prometheus-kube-controller-manager ClusterIP None <none> 10252/TCP 28h
monitoring prometheus-kube-prometheus-alertmanager ClusterIP 10.43.195.99 <none> 9093/TCP 28h
monitoring prometheus-prometheus-node-exporter ClusterIP 10.43.171.218 <none> 9100/TCP 28h
monitoring prometheus-grafana ClusterIP 10.43.20.165 <none> 80/TCP 28h
monitoring prometheus-kube-prometheus-prometheus ClusterIP 10.43.207.29 <none> 9090/TCP 28h
monitoring prometheus-kube-state-metrics ClusterIP 10.43.229.14 <none> 8080/TCP 28h
kube-system prometheus-kube-prometheus-kubelet ClusterIP None <none> 10250/TCP,10255/TCP,4194/TCP 28h
monitoring alertmanager-operated ClusterIP None <none> 9093/TCP,9094/TCP,9094/UDP 28h
monitoring prometheus-operated ClusterIP None <none> 9090/TCP 28h
kube-system traefik LoadBalancer 10.43.20.17 192.168.76.200,192.168.76.201,192.168.76.202,192.168.76.203 80:31131/TCP,443:31562/TCP 5d21h
Namespaces:
$ kubectl get namespaces
NAME STATUS AGE
kube-system Active 5d21h
default Active 5d21h
kube-public Active 5d21h
kube-node-lease Active 5d21h
monitoring Active 28h
But I couldn't reach the Grafana service.
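(A port-forward is one way to confirm that the service itself responds, assuming the chart's default service name:
$ kubectl port-forward -n monitoring svc/prometheus-grafana 3000:80
but that only helps while the tunnel is open.)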
Fair enough, I thought, let's define an Ingress. But it didn't work:
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: grafana-ingress
  namespace: monitoring
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
    - http:
        paths:
          - pathType: Prefix
            path: /
            backend:
              service:
                name: prometheus-grafana
                port:
                  number: 80
I have no idea why the request isn't getting to the service, and I can't really see where the problem is. Although I understand containers etc. (I first had everything running on Docker Swarm), I don't really know where, if anywhere, this would show up in the logs.
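(For anyone at the same point: the commands that should show what Traefik has picked up are roughly these; the deployment name assumes the default k3s packaging in kube-system:
$ kubectl describe ingress grafana-ingress -n monitoring
$ kubectl logs -n kube-system deploy/traefik)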
I've spent the past couple of days trying all sorts of things, and I finally found a hint about namespaces, problems calling services, and something called "type: ExternalName".
I checked with curl from a pod inside the cluster, and the service is delivering the data inside the "monitoring" namespace, but Traefik can't get there, or maybe can't even see it?
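The check I mean is roughly this (a throwaway pod; the image is just one convenient choice, any image with curl will do):
$ kubectl run tmp-curl -n monitoring --rm -it --restart=Never --image=curlimages/curl -- curl -sI http://prometheus-grafana.monitoring.svc.cluster.local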
Looking at the Traefik documentation, I found this about namespaces, but I have no idea where I would even start looking for it:
providers:
  kubernetesCRD:
    namespaces:
I'm assuming that k3s has set this up correctly (as an empty array), because I can't find anything on their site that tells me what to do with their combination of "klipper-lb" and "traefik".
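(Aside, for anyone searching the same thing: as far as I can tell, the way to hand extra settings to the Traefik that k3s bundles is a HelmChartConfig manifest dropped into the server's manifests directory. A rough sketch only — the chart name/namespace assume the default k3s packaging, and additionalArguments assumes the newer Traefik chart that recent k3s ships:
# /var/lib/rancher/k3s/server/manifests/traefik-config.yaml
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: traefik
  namespace: kube-system
spec:
  valuesContent: |-
    additionalArguments:
      - "--providers.kubernetescrd.namespaces=monitoring,kube-system"
)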
I finally tried to define another service with an external name:
---
apiVersion: v1
kind: Service
metadata:
  name: grafana-named
  namespace: kube-system
spec:
  type: ExternalName
  externalName: prometheus-grafana.monitoring.svc.cluster.local
  ports:
    - name: service
      protocol: TCP
      port: 80
      targetPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: grafana-ingress
  namespace: kube-system
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
    - http:
        paths:
          - pathType: Prefix
            path: /
            backend:
              service:
                name: grafana-named
                port:
                  number: 80
After 2-3 days I've tried everything I can think of and googled everything under the sun, and I still can't get to Grafana from outside the cluster nodes.
I am at a loss as to how I can make anything work with k3s. I installed Lens on my main PC and can see almost everything there, but I think the missing metrics information requires an Ingress or something like that too.
What do I have to do to get Traefik to do what I think is basically its job: route incoming requests to the backend services?
I filed a bug report on GitHub, and one of the people there (thanks again brandond) pointed me in the right direction.
The network layer uses flannel for the in-cluster networking. Flannel's default backend is "vxlan", which is seemingly more complex, involving virtual ethernet adapters.
For my requirements (read: getting the cluster to even work), the solution was to change the backend to "host-gw".
This is done by adding "--flannel-backend=host-gw" to the k3s.service unit on the controller:
$ sudo systemctl edit k3s.service
### Editing /etc/systemd/system/k3s.service.d/override.conf
### Anything between here and the comment below will become the new contents of the file
[Service]
ExecStart=
ExecStart=/usr/local/bin/k3s \
    server \
    '--flannel-backend=host-gw'
### Lines below this comment will be discarded
The first "ExecStart=" clears the existing default start command so that it can be replaced by the second one.
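After saving the override, the service still needs a restart for the new flag to take effect (the daemon-reload is harmless even though systemctl edit usually does it for you):
$ sudo systemctl daemon-reload
$ sudo systemctl restart k3s.service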
Now everything is working as I expected, and I can finally move forward with learning K8s.
I'll probably reactivate "vxlan" at some point and figure that out too.
Related
I have Prometheus installed on GCP, and I'm able to do a port-forward and access the Prometheus UI.
Prometheus pods and endpoints on GCP:
Karans-MacBook-Pro:prometheus-yamls karanalang$ kc get pods -n monitoring -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
grafana-5ccfb68647-8fjrz 0/1 Terminated 0 28h <none> gke-strimzi-prometheus-default-pool-38ca804d-nfvm <none> <none>
grafana-5ccfb68647-h7vbr 1/1 Running 0 5h24m 10.76.0.9 gke-strimzi-prometheus-default-pool-38ca804d-zzl9 <none> <none>
prometheus-operator-85d84bb848-hw6d5 1/1 Running 0 5h24m 10.76.0.4 gke-strimzi-prometheus-default-pool-38ca804d-zzl9 <none> <none>
prometheus-operator-85d84bb848-znjs6 0/1 Terminated 0 28h <none> gke-strimzi-prometheus-default-pool-38ca804d-nfvm <none> <none>
prometheus-prometheus-0 2/2 Running 0 5h24m 10.76.0.10 gke-strimzi-prometheus-default-pool-38ca804d-zzl9 <none> <none>
prometheus-prometheus-1 2/2 Running 0 5h24m 10.76.0.7 gke-strimzi-prometheus-default-pool-38ca804d-zzl9 <none> <none>
prometheus-prometheus-2 2/2 Running 0 5h24m 10.76.0.11 gke-strimzi-prometheus-default-pool-38ca804d-zzl9 <none> <none>
Karans-MacBook-Pro:prometheus-yamls karanalang$ kc get endpoints -n monitoring
NAME ENDPOINTS AGE
grafana 10.76.0.9:3000 28h
grafana-lb 10.76.0.9:3000 54m
prometheus-lb 10.76.0.10:9090,10.76.0.11:9090,10.76.0.7:9090 155m
prometheus-nodeport 10.76.0.10:9090,10.76.0.11:9090,10.76.0.7:9090 149m
prometheus-operated 10.76.0.10:9090,10.76.0.11:9090,10.76.0.7:9090 28h
prometheus-operator 10.76.0.4:8080 29h
I've created a NodePort service (port 30900) and also created a firewall rule allowing ingress to port 30900:
Karans-MacBook-Pro:prometheus-yamls karanalang$ kc get svc -n monitoring | grep prometheus-nodeport
prometheus-nodeport NodePort 10.80.7.195 <none> 9090:30900/TCP 146m
However, when I try to access it using http://<node_ip>:30900, the URL is not reachable.
Also, telnet to the host/port is not working:
Karans-MacBook-Pro:prometheus-yamls karanalang$ telnet 10.76.0.11 30900
Trying 10.76.0.11...
Karans-MacBook-Pro:prometheus-yamls karanalang$ ping 10.76.0.7
PING 10.76.0.7 (10.76.0.7): 56 data bytes
Request timeout for icmp_seq 0
Here is the YAML used to create the NodePort service (in the monitoring namespace):
apiVersion: v1
kind: Service
metadata:
  name: prometheus-nodeport
spec:
  type: NodePort
  ports:
    - name: web
      nodePort: 30900
      port: 9090
      protocol: TCP
      targetPort: 9090
  selector:
    prometheus: prometheus
Any ideas on what the issue is?
How do I debug/resolve this?
Karans-MacBook-Pro:prometheus-yamls karanalang$ telnet 10.76.0.11 30900
Trying 10.76.0.11...
Karans-MacBook-Pro:prometheus-yamls karanalang$ ping 10.76.0.7
PING 10.76.0.7 (10.76.0.7): 56 data bytes
The IPs that you used above appear to be in the Pod CIDR range, judging from the endpoints output in the question. They are not the worker node IPs. You first need to check whether you can reach any of the worker nodes over the network you are on now (home? VPN? internet?), and whether the worker nodes actually have the correct port (30900) open.
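A rough way to check both (a sketch; on a private GKE cluster the EXTERNAL-IP column may be empty, in which case you need a route to the internal addresses):
# list the worker nodes with their INTERNAL-IP / EXTERNAL-IP columns
kubectl get nodes -o wide
# then try the NodePort against a node address you can actually route to
curl -v http://<node-external-ip>:30900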
I have successfully deployed
prometheus via helm chart kube-prometheus-stack (https://prometheus-community.github.io/helm-charts)
prometheus-adapter via helm chart prometheus-adapter (https://prometheus-community.github.io/helm-charts)
using default configuration with slight customization.
I can access prometheus, grafana and alertmanager, query metrics and see fancy charts.
But prometheus-adapter keeps complaining on startup that it can't access/discover metrics:
I0326 08:16:52.266095 1 adapter.go:98] successfully using in-cluster auth
I0326 08:16:52.330094 1 dynamic_serving_content.go:111] Loaded a new cert/key pair for "serving-cert::/var/run/serving-cert/tls.crt::/var/run/serving-cert/tls.key"
E0326 08:16:52.334710 1 provider.go:227] unable to update list of all metrics: unable to fetch metrics for query "{namespace!=\"\",__name__!~\"^container_.*\"}": bad_response: unknown response code 404
I've tried various prometheus URLs in the prometheus-adapter Deployment command line argument but the problem is more or less the same.
E.g. some of the URLs I've tried are
--prometheus-url=http://prometheus-operated.prom.svc:9090
--prometheus-url=http://prometheus-kube-prometheus-prometheus.prom.svc.cluster.local:9090
There are the following services / pods running:
$ kubectl -n prom get pods
NAME READY STATUS RESTARTS AGE
alertmanager-prometheus-kube-prometheus-alertmanager-0 2/2 Running 0 16h
prometheus-adapter-76fcc79b7b-7xvrm 1/1 Running 0 10m
prometheus-grafana-559b79b564-bh85n 2/2 Running 0 16h
prometheus-kube-prometheus-operator-8556f58759-kl84l 1/1 Running 0 16h
prometheus-kube-state-metrics-6bfcd6f648-ms459 1/1 Running 0 16h
prometheus-prometheus-kube-prometheus-prometheus-0 2/2 Running 1 16h
prometheus-prometheus-node-exporter-2x6mt 1/1 Running 0 16h
prometheus-prometheus-node-exporter-bns9n 1/1 Running 0 16h
prometheus-prometheus-node-exporter-sbcjb 1/1 Running 0 16h
$ kubectl -n prom get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
alertmanager-operated ClusterIP None <none> 9093/TCP,9094/TCP,9094/UDP 16h
prometheus-adapter ClusterIP 10.0.144.45 <none> 443/TCP 16h
prometheus-grafana ClusterIP 10.0.94.160 <none> 80/TCP 16h
prometheus-kube-prometheus-alertmanager ClusterIP 10.0.0.135 <none> 9093/TCP 16h
prometheus-kube-prometheus-operator ClusterIP 10.0.170.205 <none> 443/TCP 16h
prometheus-kube-prometheus-prometheus ClusterIP 10.0.250.223 <none> 9090/TCP 16h
prometheus-kube-state-metrics ClusterIP 10.0.135.215 <none> 8080/TCP 16h
prometheus-operated ClusterIP None <none> 9090/TCP 16h
prometheus-prometheus-node-exporter ClusterIP 10.0.70.247 <none> 9100/TCP 16h
kubectl -n kube-system get deployment/metrics-server
NAME READY UP-TO-DATE AVAILABLE AGE
metrics-server 1/1 1 1 15d
The prometheus-adapter Helm chart is deployed using the following values:
prometheus:
  url: http://prometheus-kube-prometheus-prometheus.prom.svc.cluster.local
certManager:
  enabled: true
What is the correct value of --prometheus-url for prometheus-adapter in my setup?
The problem is related to the additional path under which Prometheus is exposed via Ingress.
I was using the additional path prefix /monitoring/prometheus/ in my Ingress configuration.
The solution is to tell prometheus-adapter, too, that Prometheus is reachable including this path prefix.
Thus the following makes prometheus-adapter happy:
--prometheus-url=http://prometheus-kube-prometheus-prometheus.prom.svc.cluster.local:9090/monitoring/prometheus/
And now I can see custom metrics when executing
kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1
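For reference, the same fix expressed as Helm values for the adapter chart looks roughly like this (namespace and path prefix taken from above):
prometheus:
  url: http://prometheus-kube-prometheus-prometheus.prom.svc.cluster.local
  port: 9090
  path: /monitoring/prometheus/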
Thank you "rock'n rolla" for giving some hints!
I'm using both helm charts (kube-prometheus-stack and prometheus-adapter).
The additional path prefix that works for me is "/", but the Prometheus URL must include the release name you gave to helm install for the stack. I'm using "prostack" as the stack name, so this finally works for me:
helm install <adapter-name> -n <namespace> --set prometheus.url=http://prostack-kube-prometheus-s-prometheus.monitoring.svc.cluster.local --set prometheus.port=9090 --set prometheus.path=/
I struggled with the same issue.
When I installed my Prometheus server with Helm via the community chart, I got a message like this:
The Prometheus server can be accessed via port 80 on the following DNS name from within your cluster:
my-tag-prometheus-server.carbon.svc.cluster.local
Please note that it says the service is accessible on port 80 and not 9090, which I had not noticed.
So in my values.yaml file for the prometheus-adapter chart I specified port 80 (instead of the standard 9090) and it worked:
# Url to access prometheus
prometheus:
  # Value is templated
  url: http://my-tag-prometheus-server.carbon.svc.cluster.local
  port: 80
  path: ""
I am trying to set up Consul on Kubernetes via the Helm chart, https://www.consul.io/docs/k8s/helm
Based on my pre-Kubernetes knowledge: services access Consul via a Consul Agent running on each host and listening on the host's IP.
Now I've deployed via the Helm chart to a Kubernetes cluster. First, the terminology confuses me: Consul Agent vs. Client in this setup? I presume they are the same.
Now, the setup:
Helm chart config (Terraform fragment), nothing specific to the clients/agents and their service:
global:
  name: "consul"
  datacenter: "${var.consul_config.datacenter}"
server:
  storage: "${var.consul_config.storage}"
  connect: false
syncCatalog:
  enabled: true
  default: true
  k8sAllowNamespaces: ['*']
  k8sDenyNamespaces: [${join(",", var.consul_config.k8sDenyNamespaces)}]
Pods (the client/agent ones are a DaemonSet, not in host network mode):
kubectl get pods
NAME READY STATUS RESTARTS AGE
consul-8l587 1/1 Running 0 11h
consul-cfd8z 1/1 Running 0 11h
consul-server-0 1/1 Running 0 11h
consul-server-1 1/1 Running 0 11h
consul-server-2 1/1 Running 0 11h
consul-sync-catalog-8b688ff9b-klqrv 1/1 Running 0 11h
consul-vrmtp 1/1 Running 0 11h
Services
kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
consul ExternalName <none> consul.service.consul <none> 11h
consul-dns ClusterIP 172.20.124.238 <none> 53/TCP,53/UDP 11h
consul-server ClusterIP None <none> 8500/TCP,8301/TCP,8301/UDP,8302/TCP,8302/UDP,8300/TCP,8600/TCP,8600/UDP 11h
consul-ui ClusterIP 172.20.131.29 <none> 80/TCP 11h
Question 1: Where is a service that targets the client (agent) pods, but not the server pods? Did I miss it in the Helm chart?
My plan, while I am not going to use host (Kubernetes node) networking:
Find the client/agent service, or make my own (see the sketch below), so it can be used by Consul's consumers. E.g., I would give this service address to the consul-template init container of the config-consuming application. The client pods are selectable:
kubectl get pods --selector app=consul,component=client,release=consul
consul-8l587 1/1 Running 0 11h
consul-cfd8z 1/1 Running 0 11h
consul-vrmtp 1/1 Running 0 11h
Optional: add topologyKeys to the agent service, so each consumer does not cross the host boundary.
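The kind of self-made agent service I have in mind is roughly this (a sketch only, reusing the selector from above; 8500 assumes the default client HTTP port):
apiVersion: v1
kind: Service
metadata:
  name: consul-client
spec:
  selector:
    app: consul
    component: client
    release: consul
  ports:
    - name: http
      port: 8500
      targetPort: 8500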
Question 2: Is this the right approach, or is it different for Consul Kubernetes deployments?
You can use the Kubernetes downward API to inject the IP of the host as an environment variable into your pod:
apiVersion: v1
kind: Pod
metadata:
  name: consul-example
spec:
  containers:
    - name: example
      image: 'consul:latest'
      env:
        - name: HOST_IP
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP
      command:
        - '/bin/sh'
        - '-ec'
        - |
          export CONSUL_HTTP_ADDR="${HOST_IP}:8500"
          consul kv put hello world
  restartPolicy: Never
See https://www.consul.io/docs/k8s/installation/install#accessing-the-consul-http-api for more info.
I've installed Kubernetes on Ubuntu 18.04 using this article. Everything was working fine, and then I tried to install the Kubernetes dashboard with these instructions.
Now when I try to run kubectl proxy, the dashboard does not come up, and it gives the following error message in the browser when I access it using the default kubernetes-dashboard URL:
http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "no endpoints available for service \"https:kubernetes-dashboard:\"",
  "reason": "ServiceUnavailable",
  "code": 503
}
The following commands give this output, where kubernetes-dashboard shows a status of CrashLoopBackOff:
$> kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
default amazing-app-rs-59jt9 1/1 Running 5 23d
default amazing-app-rs-k6fg5 1/1 Running 5 23d
default amazing-app-rs-qd767 1/1 Running 5 23d
default amazingapp-one-deployment-57dddd6fb7-xdxlp 1/1 Running 5 23d
default nginx-86c57db685-vwfzf 1/1 Running 4 22d
kube-system coredns-6955765f44-nqphx 0/1 Running 14 25d
kube-system coredns-6955765f44-psdv4 0/1 Running 14 25d
kube-system etcd-master-node 1/1 Running 8 25d
kube-system kube-apiserver-master-node 1/1 Running 42 25d
kube-system kube-controller-manager-master-node 1/1 Running 11 25d
kube-system kube-flannel-ds-amd64-95lvl 1/1 Running 8 25d
kube-system kube-proxy-qcpqm 1/1 Running 8 25d
kube-system kube-scheduler-master-node 1/1 Running 11 25d
kubernetes-dashboard dashboard-metrics-scraper-7b64584c5c-kvz5d 1/1 Running 0 41m
kubernetes-dashboard kubernetes-dashboard-566f567dc7-w2sbk 0/1 CrashLoopBackOff 12 41m
$> kubectl get services --all-namespaces
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default kubernetes ClusterIP ---------- <none> 443/TCP 25d
default nginx NodePort ---------- <none> 80:32188/TCP 22d
kube-system kube-dns ClusterIP ---------- <none> 53/UDP,53/TCP,9153/TCP 25d
kubernetes-dashboard dashboard-metrics-scraper ClusterIP ---------- <none> 8000/TCP 24d
kubernetes-dashboard kubernetes-dashboard ClusterIP ---------- <none> 443/TCP 24d
$ kubectl get events -n kubernetes-dashboard
LAST SEEN TYPE REASON OBJECT MESSAGE
24m Normal Pulling pod/kubernetes-dashboard-566f567dc7-w2sbk Pulling image "kubernetesui/dashboard:v2.0.0-rc2"
4m46s Warning BackOff pod/kubernetes-dashboard-566f567dc7-w2sbk Back-off restarting failed container
$ kubectl describe services kubernetes-dashboard -n kubernetes-dashboard
Name: kubernetes-dashboard
Namespace: kubernetes-dashboard
Labels: k8s-app=kubernetes-dashboard
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"k8s-app":"kubernetes-dashboard"},"name":"kubernetes-dashboard"...
Selector: k8s-app=kubernetes-dashboard
Type: ClusterIP
IP: 10.96.241.62
Port: <unset> 443/TCP
TargetPort: 8443/TCP
Endpoints:
Session Affinity: None
Events: <none>
$ kubectl logs kubernetes-dashboard-566f567dc7-w2sbk -n kubernetes-dashboard
2020/01/29 16:00:34 Starting overwatch
2020/01/29 16:00:34 Using namespace: kubernetes-dashboard
2020/01/29 16:00:34 Using in-cluster config to connect to apiserver
2020/01/29 16:00:34 Using secret token for csrf signing
2020/01/29 16:00:34 Initializing csrf token from kubernetes-dashboard-csrf secret
panic: Get https://10.96.0.1:443/api/v1/namespaces/kubernetes-dashboard/secrets/kubernetes-dashboard-csrf: dial tcp 10.96.0.1:443: i/o timeout

goroutine 1 [running]:
github.com/kubernetes/dashboard/src/app/backend/client/csrf.(*csrfTokenManager).init(0xc0003dac80)
        /home/travis/build/kubernetes/dashboard/src/app/backend/client/csrf/manager.go:40 +0x3b4
github.com/kubernetes/dashboard/src/app/backend/client/csrf.NewCsrfTokenManager(...)
        /home/travis/build/kubernetes/dashboard/src/app/backend/client/csrf/manager.go:65
github.com/kubernetes/dashboard/src/app/backend/client.(*clientManager).initCSRFKey(0xc000534200)
        /home/travis/build/kubernetes/dashboard/src/app/backend/client/manager.go:494 +0xc7
github.com/kubernetes/dashboard/src/app/backend/client.(*clientManager).init(0xc000534200)
        /home/travis/build/kubernetes/dashboard/src/app/backend/client/manager.go:462 +0x47
github.com/kubernetes/dashboard/src/app/backend/client.NewClientManager(...)
        /home/travis/build/kubernetes/dashboard/src/app/backend/client/manager.go:543
main.main()
        /home/travis/build/kubernetes/dashboard/src/app/backend/dashboard.go:105 +0x212
Any suggestions to fix this? Thanks in advance.
I noticed that the guide you used to install the Kubernetes cluster is missing one important part.
According to kubernetes documentation:
For flannel to work correctly, you must pass --pod-network-cidr=10.244.0.0/16 to kubeadm init.
Set /proc/sys/net/bridge/bridge-nf-call-iptables to 1 by running sysctl net.bridge.bridge-nf-call-iptables=1 to pass bridged IPv4 traffic to iptables’ chains. This is a requirement for some CNI plugins to work, for more information please see here.
Make sure that your firewall rules allow UDP ports 8285 and 8472 traffic for all hosts participating in the overlay network. see here .
Note that flannel works on amd64, arm, arm64, ppc64le and s390x under Linux. Windows (amd64) is claimed as supported in v0.11.0 but the usage is undocumented.
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/2140ac876ef134e0ed5af15c65e414cf26827915/Documentation/kube-flannel.yml
For more information about flannel, see the CoreOS flannel repository on GitHub .
To fix this:
I suggest using the command:
sysctl net.bridge.bridge-nf-call-iptables=1
And then reinstall flannel:
kubectl delete -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Update: I verified that the /proc/sys/net/bridge/bridge-nf-call-iptables value is 1 by default on Ubuntu 18.04 LTS, so the issue here is that you need to access the dashboard locally.
If you are connected to your master node via SSH, it should be possible to use the -X flag with ssh in order to launch a web browser via X11 forwarding. Fortunately, Ubuntu 18.04 LTS has it turned on by default.
ssh -X server
Then install local web browser like chromium.
sudo apt-get install chromium-browser
chromium-browser
And finally access the dashboard locally from node.
http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
Hope it helps.
I've been trying to use Traefik as an Ingress Controller on Google Cloud's container engine.
I got my http deployment/service up and running (when I exposed it with a normal LoadBalancer, it was answering fine).
I then removed the LoadBalancer, and followed this tutorial: https://docs.traefik.io/user-guide/kubernetes/
So I got a new traefik-ingress-controller deployment and service, and an ingress for traefik's ui which I can access through the kubectl proxy.
I then created my Ingress for my HTTP service, but here comes my issue: I can't find a way to expose it externally.
I want it to be accessible by anybody via an external IP.
What am I missing?
Here is the output of kubectl get --export all:
NAME READY STATUS RESTARTS AGE
po/mywebservice-3818647231-gr3z9 1/1 Running 0 23h
po/mywebservice-3818647231-rn4fw 1/1 Running 0 1h
po/traefik-ingress-controller-957212644-28dx6 1/1 Running 0 1h
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
svc/mywebservice 10.51.254.147 <none> 80/TCP 1d
svc/kubernetes 10.51.240.1 <none> 443/TCP 1d
svc/traefik-ingress-controller 10.51.248.165 <nodes> 80:31447/TCP,8080:32481/TCP 25m
svc/traefik-web-ui 10.51.248.65 <none> 80/TCP 3h
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deploy/mywebservice 2 2 2 2 1d
deploy/traefik-ingress-controller 1 1 1 1 3h
NAME DESIRED CURRENT READY AGE
rs/mywebservice-3818647231 2 2 2 23h
rs/traefik-ingress-controller-957212644 1 1 1 3h
You need to expose the Traefik service. Set the service spec type to LoadBalancer. Try the service file below, which I've used previously:
apiVersion: v1
kind: Service
metadata:
  name: traefik
spec:
  type: LoadBalancer
  selector:
    app: traefik
    tier: proxy
  ports:
    - port: 80
      targetPort: 80
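Once that is applied, the cloud provider should allocate an external IP for the service after a minute or two; you can watch for it with:
kubectl get service traefik --watch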