Running dashboard inside play-with-kubernetes

I'm trying to start a dashboard inside play-with-kubernetes.
These are the commands I'm running:
# start the admin (master) node
kubeadm init --apiserver-advertise-address $(hostname -i)
# start the network
kubectl apply -n kube-system -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
# allow the master to schedule pods (remove the master taint)
kubectl taint nodes --all node-role.kubernetes.io/master-
# wait until DNS is up
kubectl get pods --all-namespaces
# join the node (copy the command from the admin node's startup output, not from here)
kubeadm join --token 43d52c.d72308004d523ac4 10.0.21.3:6443
# download and run the dashboard
curl -L -s https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml | sed 's/targetPort: 8443/targetPort: 8443\n type: NodePort/' | \
kubectl apply -f -
Unfortunately, the dashboard is not available.
What should I do to deploy it correctly inside play-with-kubernetes?

You need Heapster for the dashboard to work, so execute these as well:
kubectl apply -f https://github.com/kubernetes/heapster/raw/master/deploy/kube-config/rbac/heapster-rbac.yaml
kubectl apply -f https://github.com/kubernetes/heapster/raw/master/deploy/kube-config/influxdb/heapster.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/influxdb/influxdb.yaml
Also, unless you want to fiddle with authentication, you need to grant the dashboard admin privileges with something like this:
kubectl create clusterrolebinding insecure-dashboard --clusterrole=cluster-admin --serviceaccount=kube-system:kubernetes-dashboard
Eventually a port link (30xxx) will appear, but you will need to change the URL scheme from http to https and convince your browser that you don't care about the insecure certificate.
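One thing to double-check: the sed one-liner in the question edits the raw YAML by indentation, which is easy to get wrong, so the Service may still be of type ClusterIP. Assuming the recommended manifest's kubernetes-dashboard Service in the kube-system namespace, you can inspect it and, if needed, patch it to NodePort directly:
kubectl -n kube-system get svc kubernetes-dashboard
kubectl -n kube-system patch svc kubernetes-dashboard -p '{"spec": {"type": "NodePort"}}'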
You should have a working dashboard now. Piece of cake ;)


Kubernetes: How to get other pods' names from within a pod?

I'd like to find out the names of the other pods running in the same single-host cluster. All pods are single-application containers. I have pod A (written in Java) that acts as a TCP/IP server, and a few other pods (written in C++) connect to the server.
In pod A, I can get the IP addresses of the clients (the other pods). How do I get their pod names? I can't run kubectl commands because pod A has no kubectl installed.
Thanks,
You can call the kube-apiserver directly with cURL.
First, you need a ServiceAccount bound to a ClusterRole so that it is allowed to send requests to the apiserver.
kubectl create clusterrole listpods --verb=get,list,watch --resource=pods
kubectl create clusterrolebinding listpods --clusterrole=listpods --serviceaccount=default:default
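If you would rather not hand out cluster-wide read access, the namespaced equivalents work the same way; this is just a sketch assuming everything runs in the default namespace:
kubectl create role listpods --verb=get,list,watch --resource=pods -n default
kubectl create rolebinding listpods --role=listpods --serviceaccount=default:default -n default
With a namespaced Role you would query $APISERVER/api/v1/namespaces/$NAMESPACE/pods below instead of the cluster-wide /api/v1/pods path.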
Get a shell inside a container
kubectl exec -it deploy/YOUR_DEPLOYMENT -- sh
Define the necessary parameters for your cURL call by running the commands below inside the container:
APISERVER=https://kubernetes.default.svc
SERVICEACCOUNT=/var/run/secrets/kubernetes.io/serviceaccount
NAMESPACE=$(cat ${SERVICEACCOUNT}/namespace)
TOKEN=$(cat ${SERVICEACCOUNT}/token)
CACERT=${SERVICEACCOUNT}/ca.crt
Send the pod list request to the apiserver:
curl -s -k -H "Authorization: Bearer $TOKEN" -H 'Accept: application/json' $APISERVER/api/v1/pods | jq "[ .items[] | .metadata.name ]"
Done! It will return a JSON list of the pod names, the same pods you would see with "kubectl get pods".
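Since the original goal is mapping a client IP back to a pod name, the same call can also be filtered on status.podIP with jq; the IP below is only a placeholder:
curl -s -k -H "Authorization: Bearer $TOKEN" -H 'Accept: application/json' $APISERVER/api/v1/pods | jq -r --arg ip "10.42.0.17" '.items[] | select(.status.podIP == $ip) | .metadata.name'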
For more examples, you can check the OpenShift REST API reference. Also, if you are planning to do some programmatic stuff, I advise you to check out the official kubernetes-clients.
Credits for the jq improvement go to #moonkotte.

Error: template: inject:469: function "appendMultusNetwork" not defined

istioctl kube-inject \
--injectConfigFile inject-config.yaml \
--meshConfigFile mesh-config.yaml \
--valuesFile inject-values.yaml \
--filename samples/sleep/sleep.yaml \
| kubectl apply -f -
While trying to inject the Istio sidecar container into a pod manually, I got this error:
Error: template: inject:469: function "appendMultusNetwork" not defined
https://istio.io/latest/docs/setup/additional-setup/sidecar-injection/
As mentioned in the comments, I have tried to reproduce your issue on GKE with Istio 1.7.4 installed.
I followed the documentation you mentioned and it worked without any issues.
1. Install istioctl and the Istio default profile:
curl -sL https://istio.io/downloadIstioctl | sh -
export PATH=$PATH:$HOME/.istioctl/bin
istioctl install
2. Create the samples/sleep directory and create sleep.yaml in it, for example with vi.
3. Create local copies of the configuration:
kubectl -n istio-system get configmap istio-sidecar-injector -o=jsonpath='{.data.config}' > inject-config.yaml
kubectl -n istio-system get configmap istio-sidecar-injector -o=jsonpath='{.data.values}' > inject-values.yaml
kubectl -n istio-system get configmap istio -o=jsonpath='{.data.mesh}' > mesh-config.yaml
4. Apply it with istioctl kube-inject:
istioctl kube-inject \
--injectConfigFile inject-config.yaml \
--meshConfigFile mesh-config.yaml \
--valuesFile inject-values.yaml \
--filename samples/sleep/sleep.yaml \
| kubectl apply -f -
5. Verify that the sidecar has been injected:
kubectl get pods
NAME READY STATUS RESTARTS AGE
sleep-5768c96874-m65bg 2/2 Running 0 105s
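Another way to confirm the injection (assuming the sample's app=sleep label) is to print the container names in the pod; with the sidecar injected you should see istio-proxy next to the sleep container:
kubectl get pod -l app=sleep -o jsonpath='{.items[0].spec.containers[*].name}'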
So there are a few things worth checking, as they might be causing this issue (a quick version check is also shown below):
Could you please check that you executed all of the commands correctly?
Maybe you are running an older version of Istio and should follow the older documentation?
Maybe you changed something in the local copies of the configuration above and that caused the issue? If you did, what exactly did you change?
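One check worth doing (this is a guess based on the error, not something visible from your setup): this error usually means the exported injection template references a helper that your istioctl binary does not know about, i.e. the injector ConfigMap and istioctl come from different Istio releases. You can compare versions and see whether your local template mentions the function at all:
istioctl version
grep -c appendMultusNetwork inject-config.yaml
If the grep count is non-zero but istioctl is older than the control plane that produced the ConfigMap, downloading the matching istioctl version should make kube-inject work.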

Kubernetes dashboard

I have been able to successfully set up Kubernetes on my CentOS 7 server.
While trying to get the dashboard working after following the documentation, 'kubectl proxy' serves on 127.0.0.1:9001 and not on my server IP. Does this mean I cannot access the Kubernetes dashboard from outside the server?
I need help getting the dashboard running on my public IP.
You can specify the address on which kubectl proxy should serve, e.g.:
kubectl proxy --address <EXTERNAL-IP> -p 9001
Starting to serve on 100.105.***.***:9001
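Note that kubectl proxy also filters requests by the Host header, so even when it is bound to an external address, remote requests may still be rejected unless you relax --accept-hosts as well; the '.*' regexp below accepts everything and is insecure, so use it only for testing:
kubectl proxy --address 0.0.0.0 --port 9001 --accept-hosts '.*'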
You can also use port forwarding to access the dashboard.
kubectl port-forward --address 0.0.0.0 pod/dashboard 8888:80
This will listen on port 8888 on all addresses and route traffic directly to your pod.
For instance:
rsha:~$ kubectl port-forward --address 0.0.0.0 deploy/webserver 8888:80
Forwarding from 0.0.0.0:8888 -> 80
In another terminal running
rsha:~$ curl 100.105.***.***:8888
<html><body><h1>It works!</h1></body></html>
As I understand it, you would like to access the dashboard from your laptop. What you should do is create an admin ServiceAccount called k8s-admin:
$ kubectl --namespace kube-system create serviceaccount k8s-admin
$ kubectl create clusterrolebinding k8s-admin --serviceaccount=kube-system:k8s-admin --clusterrole=cluster-admin
Then set up kubectl on your laptop; e.g. for macOS it looks like this (see the documentation):
$ brew install kubernetes-cli
Set up a proxy to your workstation: create a ~/.kube directory on your laptop and then scp the ~/.kube/config file from the Kubernetes master into it.
Then get the authentication token you need to connect to the dashboard:
$ kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep k8s-admin | awk '{print $1}')
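If that grep returns nothing (on Kubernetes v1.24+ a ServiceAccount no longer gets a token Secret created automatically), you can request a short-lived token directly instead:
$ kubectl -n kube-system create token k8s-admin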
Now start the proxy:
$ kubectl proxy
Now open the dashboard by going to:
http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
You should see the Token option; copy-paste the token from the prior step and sign in.
You can follow this tutorial.

Trying to add a node to the kubernetes cluster using kubeadm join is showing an error

I am using kubeadm to create a Kubernetes cluster. kubeadm init was successful, but when I try to add nodes I see the error below. Any direction is highly appreciated.
kubeadm join 10.127.0.142:6443 --token ddd0 --discovery-token-ca-cert-hash sha256:ddddd
[preflight] running pre-flight checks
[WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs_sh ip_vs ip_vs_rr ip_vs_wrr] or no builtin kernel ipvs support: map[ip_vs_sh:{} nf_conntrack_ipv4:{} ip_vs:{} ip_vs_rr:{} ip_vs_wrr:{}]
you can solve this problem with following methods:
1. Run 'modprobe -- ' to load missing kernel modules;
2. Provide the missing builtin kernel ipvs support
[discovery] Trying to connect to API Server "10.127.0.142:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.127.0.142:6443"
[discovery] Requesting info from "https://10.127.0.142:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "10.127.0.142:6443"
[discovery] Successfully established connection with API Server "10.127.0.142:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.12" ConfigMap in the kube-system namespace
configmaps "kubelet-config-1.12" is forbidden: User "system:bootstrap:mq0t2n" cannot get configmaps in the namespace "kube-system"
I'm pretty sure you have a version mismatch between your master and worker nodes.
Follow this official instruction to upgrade the cluster to the same version.
A second solution is to downgrade the worker node to the master node's version.
I started seeing this type of message on 1.12 around Dec 5th, right after the release of 1.13.
I was using a scripted install, so there was no version mismatch or anything between my master and worker nodes.
If 1.12 is still the desired version, I posted a fix for that permission issue: k8s 1.12 kubeadm join permission fix.
The fix is also provided below:
Perform STEPS 1, 2, 3, 4 on the master node.
Perform STEP 5 on the worker node.
STEP 1: Create a new "kubelet-config-1.12" ConfigMap from existing "kubelet-config-1.13" ConfigMap:
$ kubectl get cm --all-namespaces
$ kubectl -n kube-system get cm kubelet-config-1.13 -o yaml --export > kubelet-config-1.12-cm.yaml
$ vim kubelet-config-1.12-cm.yaml #modify at the bottom:
#name: kubelet-config-1.12
#delete selfLink
$ kubectl -n kube-system create -f kubelet-config-1.12-cm.yaml
STEP 2: Get token prefix:
$ sudo kubeadm token list #if no output, then create a token:
$ sudo kubeadm token create
TOKEN ... ...
a0b1c2.svn4my9ifft4zxgg ... ...
# Token prefix is "a0b1c2"
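To grab just the prefix in one line (assuming the token sits in the first column of the second output line, as above):
$ sudo kubeadm token list | awk 'NR==2 {split($1, a, "."); print a[1]}'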
STEP 3: Create a new "kubeadm:kubelet-config-1.12" role from existing "kubeadm:kubelet-config-1.13" role:
$ kubectl get roles --all-namespaces
$ kubectl -n kube-system get role kubeadm:kubelet-config-1.13 -o yaml > kubeadm:kubelet-config-1.12-role.yaml
$ vim kubeadm\:kubelet-config-1.12-role.yaml #modify the following:
#name: kubeadm:kubelet-config-1.12
#resourceNames: kubelet-config-1.12
#delete creationTimestamp, resourceVersion, selfLink, uid (because --export option is not supported)
$ kubectl -n kube-system create -f kubeadm\:kubelet-config-1.12-role.yaml
STEP 4: Create a new rolebinding "kubeadm:kubelet-config-1.12" from existing "kubeadm:kubelet-config-1.13" rolebinding:
$ kubectl get rolebindings --all-namespaces
$ kubectl -n kube-system get rolebinding kubeadm:kubelet-config-1.13 -o yaml > kubeadm:kubelet-config-1.12-rolebinding.yaml
$ vim kubeadm\:kubelet-config-1.12-rolebinding.yaml #modify the following:
#metadata/name: kubeadm:kubelet-config-1.12
#roleRef/name: kubeadm:kubelet-config-1.12
#delete creationTimestamp, resourceVersion, selfLink, uid (because --export option is not supported)
- apiGroup: rbac.authorization.k8s.io #add these 3 lines as another group in "subjects:" at the bottom, with the 6 character token prefix from STEP 2
kind: Group
name: system:bootstrap:a0b1c2
$ kubectl -n kube-system create -f kubeadm\:kubelet-config-1.12-rolebinding.yaml
STEP 5: Run kubeadm join from Worker node:
$ sudo kubeadm join --token <token> <master-IP>:6443 --discovery-token-ca-cert-hash sha256:<key-value>
# If you receive 2 ERRORS, run kubeadm join again with the following options:
$ sudo kubeadm join --token <token> <master-IP>:6443 --discovery-token-ca-cert-hash sha256:<key-value> --ignore-preflight-errors=FileAvailable--etc-kubernetes-bootstrap-kubelet.conf,FileAvailable--etc-kubernetes-pki-ca.crt
The same fix in condensed form, run on the master node:
kubectl -n kube-system get role kubeadm:kubelet-config-1.13 -o yaml > kubeadm:kubelet-config-1.12-role.yaml
#metadata/name: kubeadm:kubelet-config-1.12
#resourceNames: kubelet-config-1.12
#delete creationTimestamp, resourceVersion, selfLink, uid (because --export option is not supported)
kubectl apply -f kubeadm:kubelet-config-1.12-role.yaml
kubectl -n kube-system get rolebinding kubeadm:kubelet-config-1.13 -o yaml > kubeadm:kubelet-config-1.12-rolebinding.yaml
#metadata/name: kubeadm:kubelet-config-1.12
#roleRef/name: kubeadm:kubelet-config-1.12
#delete creationTimestamp, resourceVersion, selfLink, uid (because --export option is not supported)
kubectl apply -f kubeadm:kubelet-config-1.12-rolebinding.yaml
kubectl get configmap kubelet-config-1.13 -n kube-system -oyaml > kubelet-config-1.12
#metadata/name: kubelet-config-1.12
#delete creationTimestamp, resourceVersion, selfLink, uid (because --export option is not supported)
kubectl apply -f kubelet-config-1.12
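After creating the three objects, a quick sanity check that the 1.12-named copies now exist:
kubectl -n kube-system get role,rolebinding,configmap | grep kubelet-config-1.12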
Log in to the node that you want to join and delete the following files:
rm /etc/kubernetes/bootstrap-kubelet.conf
rm /etc/kubernetes/pki/ca.crt
Now run the kubeadm join command again.
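If the join still fails because of other files left over from a previous join attempt, kubeadm reset on the worker node clears all of them; it is more drastic than removing just the two files above, so only use it on a node you are re-joining from scratch:
kubeadm reset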

How to recreate all the resources from a Weave Net manifest?

Because some of my pods get stuck, I want to recreate the DaemonSet and a few security-related resources created by the Weave Net plugin during the kubeadm bootstrap.
I have run the kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')" command several times, but the pod status didn't change.
Here is the command to install Weave Net for Kubernetes cluster version 1.6+ (just to have a complete answer):
$ kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
To delete all those resources, run the following command:
$ kubectl delete -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
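To actually recreate everything, re-apply the same manifest after the delete and then delete the stuck pods so their controllers recreate them (this assumes the standard manifest's name=weave-net label on the Weave Net pods):
$ kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
$ kubectl -n kube-system delete pod -l name=weave-net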
To get more information about the root cause of the problem, check the state of the pods:
$ kubectl describe pod weave-net-xspgn
$ kubectl describe pod kube-dns-86f4d74b45-9t4mj
$ kubectl describe pod kube-proxy-2gjj4
The kubelet log can also be very helpful:
# journalctl -u kubelet