kubernetes dashboard will not load

I am completely new to Kubernetes, so go easy on me.
I am running kubectl proxy but am only seeing the JSON output. Based on this discussion I attempted to set the memory limits by running:
kubectl edit deployment kubernetes-dashboard --namespace kube-system
I then changed the container memory limit:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  ...
spec:
  ...
  template:
    metadata:
      ...
    spec:
      containers:
      - image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.1
        imagePullPolicy: IfNotPresent
        livenessProbe:
          ...
        name: kubernetes-dashboard
        ports:
        - containerPort: 9090
          protocol: TCP
        resources:
          limits:
            memory: 1Gi
I still only get the JSON served when I save that and visit http://127.0.0.1:8001/ui
Running kubectl logs --namespace kube-system kubernetes-dashboard-665756d87d-jssd8 I see the following:
Starting overwatch
Using in-cluster config to connect to apiserver
Using service account token for csrf signing
No request provided. Skipping authorization
Successful initial request to the apiserver, version: v1.10.0
Generating JWE encryption key
New synchronizer has been registered: kubernetes-dashboard-key-holder-kube-system. Starting
Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system
Initializing JWE encryption key from synchronized object
Creating in-cluster Heapster client
Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
Serving insecurely on HTTP port: 9090
I read through a bunch of links from a Google search on the error but nothing really worked.
Key components are:
Local: Ubuntu 18.04 LTS
minikube: v0.28.0
Kubernetes Dashboard: 1.8.3
Installed via:
kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
Halp!

Have you considered using the minikube dashboard? You can reach it by:
minikube dashboard
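If you just want the URL printed instead of having a browser window opened, minikube also supports a --url flag:
minikube dashboard --url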
Also, you will get JSON at http://127.0.0.1:8001/ui because that endpoint is deprecated, so you have to use the full proxy URL, as stated on the dashboard GitHub page.
If you still want to use this 'external' dashboard for future non-minikube projects, or for some other reason I don't know about, you can reach it by running:
kubectl proxy
and then:
http://localhost:8001/api/v1/namespaces/kube-system/services/http:kubernetes-dashboard:/proxy/
Note that the documentation shows https, which is not correct in this case (it might be a documentation error, or it may be clarified elsewhere in the documentation, which I suggest you read if you need further information on the web UI).
Hope this helps.
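As a quick sanity check from a terminal, here is a minimal sketch (the service path is the proxy URL above; the http: scheme matches the "Serving insecurely on HTTP port: 9090" line in your logs):
kubectl proxy &
curl -s http://localhost:8001/api/v1/namespaces/kube-system/services/http:kubernetes-dashboard:/proxy/ | head
If this returns the dashboard's HTML rather than an error object, the same URL will work in the browser.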

Related

Prometheus returns error context deadline exceeded

I deployed Prometheus with a Helm chart from Rancher. Targets such as Alertmanager, Prometheus, Grafana, Node-exporter, Kubelet, etc. are configured automatically. The endpoint from Alertmanager refers to the IP address of the specific pod, for example. I also configured multiple targets successfully, like Jira and Confluence.
Since the service external-dns is running in the namespace kube-system, it's also configured automatically. But only this service is getting the error Context deadline exceeded.
I checked in a random pod whether those metrics are accessible by running the command curl -s http://<IP-ADDRESS-POD>:7979/metrics. I also did this with the service IP address (kubectl get service external-dns and curl -s http://<IP-ADDRESS-SVC>:7979/metrics).
Both of these curl commands returned the metrics within a second. So increasing the scrape timeout won't help.
But when I exec into the Prometheus container and use the promtool debug metrics command, it shows the same behaviour as in my browser: external-dns returns a timeout with both of the IP addresses, while any other target just returns the metrics.
I also don't think it's an SSL issue, because I already injected the correct CA bundle for the Jira and Confluence targets.
So, anybody got an idea? :)
I had to edit the NetworkPolicy in the kube-system namespace. The containers from the cattle-monitoring-system namespace are now allowed to access the containers in the kube-system namespace. You can upload your NetworkPolicies here and it visualizes which resources have access and which don't. The NetworkPolicy looks like this now:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-network-policy
  namespace: kube-system
spec:
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: cattle-monitoring-system
  - from:
    - podSelector: {}
  podSelector: {}
  policyTypes:
  - Ingress
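To check that the policy took effect, a small sketch (the manifest filename and pod name are illustrative; the address placeholders are the ones from the question, and curl is assumed to exist in the container, otherwise use promtool as before):
kubectl apply -f default-network-policy.yaml
kubectl -n cattle-monitoring-system exec -it <prometheus-pod> -- curl -s --max-time 10 http://<IP-ADDRESS-POD>:7979/metrics | head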

Unable to access minikube IP address

I am an absolute beginner to Kubernetes, and I was following this tutorial to get started. I have managed to write the yaml files. However, once I deploy it, I am not able to access the web app.
This is my webapp yaml file
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp-deployment
  labels:
    app: webapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
      - name: webapp
        image: nanajanashia/k8s-demo-app:v1.0
        ports:
        - containerPort: 3000
        env:
        - name: USER_NAME
          valueFrom:
            secretKeyRef:
              name: mongo-secret
              key: mongo-user
        - name: USER_PWD
          valueFrom:
            secretKeyRef:
              name: mongo-secret
              key: mongo-password
        - name: DB_URL
          valueFrom:
            configMapKeyRef:
              name: mongo-config
              key: mongo-url
---
apiVersion: v1
kind: Service
metadata:
  name: webapp-servicel
spec:
  type: NodePort
  selector:
    app: webapp
  ports:
  - protocol: TCP
    port: 3000
    targetPort: 3000
    nodePort: 30200
When I run the command: kubectl get node
When I run the command: kubectl get pods, I can see the pods running.
kubectl get svc
I then checked the logs for the webapp; I don't see any errors.
I then checked the detailed output by running the command: kubectl describe pod podname
I don't see any obvious errors in the result above, but again I am not experienced enough to check if there is any config that's not set properly.
Other things I have done as troubleshooting
Ran the following command for minikube to open up the app: minikube service webapp-servicel. It opens up the web page, but again does not connect to the IP.
Uninstalled minikube, kubectl and all relevant folders, and ran everything again.
Pinged the IP address directly from the command line, and cannot reach it.
I would appreciate if someone can help me fix this.
Try these 3 options:
Run kubectl get node -o wide to get the IP address of the node, and then open NODE_IP_ADDRESS:30200 in a web browser.
Alternatively, you can run minikube service <SERVICE_NAME> --url, which will give you a direct URL to access the application; open that URL in a web browser.
kubectl port-forward svc/<SERVICE_NAME> 3000:3000
and access the application on localhost:3000
I suggest you try port forwarding:
https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/
kubectl port-forward svc/x-service NodePort:Port
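With the service name and ports taken from the posted manifest, that would be:
kubectl port-forward svc/webapp-servicel 30200:3000
after which the app should answer on http://localhost:30200.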
I got stuck here as well. After looking through some of the gitlab issues, I found a helpful tip about the minikube driver. The instructions for starting minikube in the video are incorrect if you used:
minikube start --driver docker
Here's how to fix your problem.
stop minikube
minikube stop
delete minikube (this deletes your cluster)
minikube delete
start up minikube again, but this time specify the hyperkit driver
minikube start --vm-driver=hyperkit
check status
minikube status
Reapply your components in this order:
kubectl apply -f mongo-config.yaml
kubectl apply -f mongo-secret.yaml
kubectl apply -f mongo.yaml
kubectl apply -f webapp.yaml
get your ip
minikube ip
Open a browser and go to <minikube ip>:30200 (or whatever port you defined; mine was 30100). You should see an image of a dog and a form.
Some information in this SO post is useful too.
On Windows 11 with Ubuntu 20.04 WSL, it worked for me by using:
minikube start --driver=hyperv
On Windows 10 with Docker Desktop you don't even need to use minikube. Just enable Kubernetes in the Docker Desktop settings and use kubectl. Check the link for further information.
Using Docker Desktop's Kubernetes, I could simply reach the webapp at localhost:30100. In my case, for some reason, I had to pull the mongo Docker image manually with docker pull mongo:5.0.
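If you go the Docker Desktop route, point kubectl at its context first (docker-desktop is the context name Docker Desktop typically creates):
kubectl config use-context docker-desktop
kubectl get nodes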

kubernetes "unable to get metrics"

I am trying to autoscale a deployment and a statefulset by running, respectively, these two commands:
kubectl autoscale statefulset mysql --cpu-percent=50 --min=1 --max=10
kubectl expose deployment frontend --type=LoadBalancer --name=frontend
Sadly, on the minikube dashboard, this error appears under both services:
failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io)
Searching online I read that it might be a dns error, so I checked but CoreDNS seems to be running fine.
Both workloads are nothing special; this is the 'frontend' deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
  labels:
    app: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: frontend
        image: hubuser/repo
        ports:
        - containerPort: 3000
Has anyone got any suggestions?
First of all, could you please verify if the API is working fine? To do so, please run kubectl get --raw /apis/metrics.k8s.io/v1beta1.
If you get an error similar to:
“Error from server (NotFound):”
Please follow these steps:
1.- Remove all the proxy environment variables from the kube-apiserver manifest.
2.- In the kube-controller-manager-amd64, set --horizontal-pod-autoscaler-use-rest-clients=false
3.- The last scenario is that your metrics-server add-on is disabled (it is disabled by default). You can verify this by using:
$ minikube addons list
If it is disabled, you will see something like metrics-server: disabled.
You can enable it by using:
$ minikube addons enable metrics-server
When it is done, delete and recreate your HPA.
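Putting the last scenario together for this case, a sketch that reuses the commands from the question (the HPA name mysql is an assumption, based on kubectl autoscale naming the HPA after its target by default):
minikube addons enable metrics-server
kubectl delete hpa mysql
kubectl autoscale statefulset mysql --cpu-percent=50 --min=1 --max=10
kubectl get hpa
kubectl get --raw /apis/metrics.k8s.io/v1beta1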
You can use the following thread as a reference.

kubernetes doesn't reach internal registry

I've deployed a Docker registry inside my Kubernetes cluster:
$ kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
registry-docker-registry ClusterIP 10.43.39.81 <none> 443/TCP 162m
I'm able to pull images from my machine (service is exposed via an ingress rule):
$ docker pull registry-docker-registry.registry/skaffold-covid-backend:c5dfd81-dirty@sha256:76312ebc62c4b3dd61b4451fe01b1ecd2e6b03a2b3146c7f25df3d3cfb4512cd
...
Status: Downloaded newer image for registry-do...
When I try to deploy my image into the same Kubernetes cluster:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: covid-backend
  namespace: skaffold
spec:
  replicas: 3
  selector:
    matchLabels:
      app: covid-backend
  template:
    metadata:
      labels:
        app: covid-backend
    spec:
      containers:
      - image: registry-docker-registry.registry/skaffold-covid-backend:c5dfd81-dirty@sha256:76312ebc62c4b3dd61b4451fe01b1ecd2e6b03a2b3146c7f25df3d3cfb4512cd
        name: covid-backend
        ports:
        - containerPort: 8080
Then, I've tried to deploy it:
$ cat pod.yaml | kubectl apply -f -
However, kubernetes isn't able to reach registry:
Extract of kubectl get events:
6s Normal Pulling pod/covid-backend-774bd78db5-89vt9 Pulling image "registry-docker-registry.registry/skaffold-covid-backend:c5dfd81-dirty@sha256:76312ebc62c4b3dd61b4451fe01b1ecd2e6b03a2b3146c7f25df3d3cfb4512cd"
1s Warning Failed pod/covid-backend-774bd78db5-89vt9 Failed to pull image "registry-docker-registry.registry/skaffold-covid-backend:c5dfd81-dirty@sha256:76312ebc62c4b3dd61b4451fe01b1ecd2e6b03a2b3146c7f25df3d3cfb4512cd": rpc error: code = Unknown desc = failed to pull and unpack image "registry-docker-registry.registry/skaffold-covid-backend@sha256:76312ebc62c4b3dd61b4451fe01b1ecd2e6b03a2b3146c7f25df3d3cfb4512cd": failed to resolve reference "registry-docker-registry.registry/skaffold-covid-backend@sha256:76312ebc62c4b3dd61b4451fe01b1ecd2e6b03a2b3146c7f25df3d3cfb4512cd": failed to do request: Head https://registry-docker-registry.registry/v2/skaffold-covid-backend/manifests/sha256:76312ebc62c4b3dd61b4451fe01b1ecd2e6b03a2b3146c7f25df3d3cfb4512cd: dial tcp: lookup registry-docker-registry.registry: Try again
1s Warning Failed pod/covid-backend-774bd78db5-89vt9 Error: ErrImagePull
As you can see, Kubernetes is not able to access the internally deployed registry...
Any ideas?
I would recommend following the docs from k3d; they are here.
More precisely, this one:
Using your own local registry
If you don't want k3d to manage your registry, you can start it with some docker commands, like:
docker volume create local_registry
docker container run -d --name registry.local -v local_registry:/var/lib/registry --restart always -p 5000:5000 registry:2
These commands will start your registry at registry.local:5000. In order to push to this registry, you will need to add a line to /etc/hosts as described in the previous section. Once your registry is up and running, we will need to add it to your registries.yaml configuration file. Finally, you must connect the registry network to the k3d cluster network: docker network connect k3d-k3s-default registry.local. And then you can check your local registry.
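For reference, a minimal registries.yaml in the k3s mirrors format might look like the following sketch (the address assumes the registry.local:5000 registry started above; where this file must be placed or mounted depends on your k3d version):
cat > registries.yaml <<EOF
mirrors:
  "registry.local:5000":
    endpoint:
      - "http://registry.local:5000"
EOF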
Pushing to your local registry address
The registry will be located, by default, at registry.local:5000 (customizable with the --registry-name and --registry-port parameters). All the nodes in your k3d cluster can resolve this hostname (thanks to the DNS server provided by the Docker daemon) but, in order to be able to push to this registry, this hostname must also be resolved from your host.
The easiest solution for this is to add an entry in your /etc/hosts file like this:
127.0.0.1 registry.local
Once again, this will only work with k3s >= v0.10.0 (see the section below when using k3s <= v0.9.1)
Local registry volume
The local k3d registry uses a volume for storing the images. This volume will be destroyed when the k3d registry is released. In order to persist this volume and make these images survive the removal of the registry, you can specify a volume with --registry-volume and use the --keep-registry-volume flag when deleting the cluster. This will create a volume with the given name the first time the registry is used, while successive invocations will just mount this existing volume in the k3d registry container.
Docker Hub cache
The local k3d registry can also be used for caching images from the Docker Hub. You can start the registry as a pull-through cache when the cluster is created with --enable-registry-cache. Used in conjunction with --registry-volume/--keep-registry-volume, this can speed up all the downloads from the Hub by keeping a persistent cache of images on your local machine.
Testing your registry
You should test that you can:
push to your registry from your local development machine.
use images from that registry in Deployments in your k3d cluster.
We will verify these two things for a local registry (located at registry.local:5000) running in your development machine. Things would be basically the same for checking an external registry, but some additional configuration could be necessary in your local machine when using an authenticated or secure registry (please refer to Docker's documentation for this).
Firstly, we can download some image (like nginx) and push it to our local registry with:
docker pull nginx:latest
docker tag nginx:latest registry.local:5000/nginx:latest
docker push registry.local:5000/nginx:latest
Then we can deploy a pod referencing this image to your cluster:
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-test-registry
labels:
app: nginx-test-registry
spec:
replicas: 1
selector:
matchLabels:
app: nginx-test-registry
template:
metadata:
labels:
app: nginx-test-registry
spec:
containers:
- name: nginx-test-registry
image: registry.local:5000/nginx:latest
ports:
- containerPort: 80
EOF
Then you should check that the pod is running with kubectl get pods -l "app=nginx-test-registry".
Additionally, there are 2 GitHub links worth visiting:
K3d not able resolve dns
You could try to use an answer provided by @rjshrjndrn; it might solve your issue with DNS.
docker images are not pulled from docker repository behind corporate proxy
An open GitHub issue on k3d with the same problem as yours.

Error from server (NotFound): replicationcontrollers "kubia-liveness" not found

I have created a pod using the below yaml.
apiVersion: v1
kind: Pod
metadata:
  name: kubia-liveness
spec:
  containers:
  - image: luksa/kubia-unhealthy
    name: kubia
    livenessProbe:
      httpGet:
        path: /
        port: 8080
Then I created the pod using the below command.
$ kubectl create -f kubia-liveness-probe.yaml
It created a pod successfully.
Then I'm trying to create a load balancer service to access it from the external world.
For that I'm using the below command.
$ kubectl expose rc kubia-liveness --type=LoadBalancer --name kubia-liveness-http
For this, I'm getting below error.
Error from server (NotFound): replicationcontrollers "kubia-liveness" not found
I'm not sure how to create replication controllers. Could anybody please give me the command to do so?
You are mixing two approaches here: creating resources from a yaml definition, which is fine by itself (but bear in mind that it is quite rare to create a bare Pod rather than a Deployment or ReplicationController), and exposing via the CLI, which makes some assumptions (i.e. it expects a replication controller) and creates the appropriate Service based on those assumptions. My suggestion would be to create the Service from a yaml manifest as well, so you can tailor it to fit your case.
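For illustration, a minimal Service manifest of the kind suggested might look like this sketch. It assumes you add a matching label (e.g. app: kubia) to the pod's metadata, since the posted pod has no labels for a selector to match; the name and type mirror the kubectl expose command, and targetPort 8080 comes from the livenessProbe:
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: kubia-liveness-http
spec:
  type: LoadBalancer
  selector:
    app: kubia  # assumed label added to the pod
  ports:
  - port: 80
    targetPort: 8080  # the container port probed above
EOF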