The output of helm status mychart shows the NAMESPACE in which the chart is deployed, i.e. NAMESPACE: default.
#=> helm status mychart
LAST DEPLOYED: Tue Sep 24 21:32:45 2019
NAMESPACE: default
STATUS: DEPLOYED
==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
nginx-web-stg-55f55958-v2cxm 0/1 Pending 0 28m
tomcat-api-stg-6d54498fdd-cqctr 1/1 Running 0 28m
and if I run kubectl get all -A, it shows the NAMESPACE along with each resource name:
#=> kubectl get all -A
NAMESPACE NAME READY STATUS RESTARTS AGE
nginx pod/nginx-web-stg-55f55958-v2cxm 0/1 Pending 0 20m
tomcat pod/tomcat-api-stg-6d54498fdd-cqctr 1/1 Running 0 20m
The kubectl output includes a NAMESPACE column, but helm status mychart does not. I would like to print the resources along with their NAMESPACE in the helm status mychart output.
The output formats of kubectl and helm are completely unrelated. I'm not aware that you can modify the output of helm status in any way to make it display the namespace with each resource.
Helm helps you manage Kubernetes applications — Helm Charts help you define, install, and upgrade even the most complex Kubernetes application.
The Kubernetes command-line tool, kubectl, allows you to run commands against Kubernetes clusters. You can use kubectl to deploy applications, inspect and manage cluster resources, and view logs.
According to the issue, simply execute:
$ kubectl api-resources -o name | xargs -n 1 kubectl get --show-kind --ignore-not-found -l release=your-chart-name --all-namespaces
Sample output:
user@home:~$ kubectl api-resources -o name | xargs -n 1 kubectl get --show-kind --ignore-not-found -l release=terrific-ferret --all-namespaces
NAME STATUS MESSAGE ERROR
componentstatus/scheduler Healthy ok
componentstatus/etcd-0 Healthy {"health": "true"}
componentstatus/etcd-1 Healthy {"health": "true"}
componentstatus/controller-manager Healthy ok
NAMESPACE NAME DATA AGE
default configmap/terrific-ferret-mysql-test 1 12m
NAMESPACE NAME ENDPOINTS AGE
default endpoints/terrific-ferret-mysql aa.bb.cc.dd:port 12m
NAMESPACE NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
default persistentvolumeclaim/terrific-ferret-mysql Bound pvc-896382d2 8Gi RWO standard 12m
NAMESPACE NAME READY STATUS RESTARTS AGE
default pod/terrific-ferret-mysql-86588b4646 1/1 Running 0 2m55s
NAMESPACE NAME TYPE DATA AGE
default secret/terrific-ferret-mysql Opaque 2 13m
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default service/terrific-ferret-mysql ClusterIP xx.yy.zz.ww <none> 3306/TCP 13m
NAMESPACE NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
default deployment.apps/terrific-ferret-mysql 1 1 1 1 13m
NAMESPACE NAME DESIRED CURRENT READY AGE
default replicaset.apps/terrific-ferret-mysql-86 1 1 1 13m
We are using kubectl api-resources to list all supported resource types along with their shortnames.
You can find useful information here: api-resources.
Useful blog: kubectl cheat sheet.
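If you run this lookup often, you can wrap it in a small shell function (a sketch based on the command above; it assumes the chart labels its resources with release=<name>, as older Helm charts commonly do — charts following the newer label conventions typically use app.kubernetes.io/instance instead):
helm_resources() {
  # List every listable resource carrying the given release label, in all namespaces.
  kubectl api-resources --verbs=list -o name \
    | xargs -n 1 kubectl get --show-kind --ignore-not-found \
        -l release="$1" --all-namespaces
}
# Usage:
helm_resources your-chart-name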
I'm using Kubernetes version 1.19.16 on a bare-metal Ubuntu 18.04 LTS server. When I try to deploy the nginx-ingress YAML files, it always fails with the errors below.
The following steps were followed to deploy nginx-ingress:
$ git clone https://github.com/nginxinc/kubernetes-ingress.git
cd kubernetes-ingress/deployments
kubernetes-ingress/deployments$ git branch
* main
$ kubectl apply -f common/ns-and-sa.yaml
$ kubectl apply -f rbac/rbac.yaml
$ kubectl apply -f rbac/ap-rbac.yaml
$ kubectl apply -f common/default-server-secret.yaml
$ kubectl apply -f common/nginx-config.yaml
$ kubectl apply -f deployment/nginx-ingress.yaml
deployment.apps/nginx-ingress created
$ kubectl get pods -n nginx-ingress -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-ingress-75c4bd64bd-mm52x 0/1 Error 2 21s 10.244.1.5 k8s-master <none> <none>
$ kubectl -n nginx-ingress get all
NAME READY STATUS RESTARTS AGE
pod/nginx-ingress-75c4bd64bd-mm52x 0/1 CrashLoopBackOff 12 38m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/nginx-ingress 0/1 1 0 38m
NAME DESIRED CURRENT READY AGE
replicaset.apps/nginx-ingress-75c4bd64bd 1 1 0 38m
$ kubectl logs nginx-ingress-75c4bd64bd-mm52x -n nginx-ingress
W1003 04:53:02.833073 1 flags.go:273] Ignoring unhandled arguments: []
I1003 04:53:02.833154 1 flags.go:190] Starting NGINX Ingress Controller Version=2.3.1 PlusFlag=false
I1003 04:53:02.833158 1 flags.go:191] Commit=a8742472b9ddf27433b6b1de49d250aa9a7cb47e Date=2022-09-16T08:09:31Z DirtyState=false Arch=linux/amd64 Go=go1.18.5
I1003 04:53:02.844374 1 main.go:210] Kubernetes version: 1.19.16
F1003 04:53:02.846604 1 main.go:225] Error when getting IngressClass nginx: ingressclasses.networking.k8s.io "nginx" not found
$ kubectl describe pods nginx-ingress-75c4bd64bd-mm52x -n nginx-ingress
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 3m6s default-scheduler Successfully assigned nginx-ingress/nginx-ingress-75c4bd64bd-mm52x to k8s-worker-1
Normal Pulled 87s (x5 over 3m5s) kubelet Container image "nginx/nginx-ingress:2.3.1" already present on machine
Normal Created 87s (x5 over 3m5s) kubelet Created container nginx-ingress
Normal Started 87s (x5 over 3m5s) kubelet Started container nginx-ingress
Warning BackOff 75s (x10 over 3m3s) kubelet Back-off restarting failed container
NGINX Ingress Controller deployment file link, for reference.
Since I'm using the main branch of the kubernetes-ingress.git repository, I'm not sure whether the main branch is compatible with my Kubernetes version.
Can anyone share some pointers to solve this?
I think you missed creating the IngressClass resource named "nginx", which is why the controller is not able to find it: https://github.com/nginxinc/kubernetes-ingress/blob/main/deployments/common/ingress-class.yaml#L4
kubectl apply -f common/ingress-class.yaml
You can follow the steps from this document: https://docs.nginx.com/nginx-ingress-controller/installation/installation-with-manifests/
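For reference, the missing object is roughly the following (a sketch of an IngressClass applied inline; the controller value is the one conventionally used by the NGINX Inc. controller and may differ in your version, so prefer the ingress-class.yaml shipped with the repo):
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: nginx
spec:
  controller: nginx.org/ingress-controller
EOF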
I can list pods in my prod namespace
kubectl get pods -n prod
NAME READY STATUS RESTARTS AGE
curl-pod 1/1 Running 1 (32m ago) 38m
web 1/1 Running 1 (33m ago) 38m
But I get an error:
kubectl describe pods curl-pod
Error from server (NotFound): pods "curl-pod" not found
Getting the events shows:
Normal Scheduled pod/curl-pod Successfully assigned prod/curl-pod to minikube
Why?
Kubernetes manages resources by namespace, so you must specify the namespace; otherwise kubectl uses the default namespace.
So you must type:
kubectl describe pod/curl-pod -n prod
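If you work in that namespace most of the time, you can also make it the default for your current kubectl context (standard kubectl commands):
kubectl config set-context --current --namespace=prod
kubectl describe pod curl-pod   # now resolves in prod without -n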
I tried installing Ansible AWX. However, AWX also installs PostgreSQL on the system (I am using kubernetes for AWX btw). I understand that PostgreSQL is one of the requirements for AWX.
Now, for another project, I have to install PostgreSQL (on Kubernetes itself). I looked up a method online and it is working. However, is there some way I can do it automatically, just like the installation of AWX?
Thanks,
Suhas
This can be achieved by using the awx-operator. Below is a demo installation using Helm. By default, AWX and the PostgreSQL database are located on the same worker node, but this requires a default StorageClass (SC).
Helm Deployment
Configure the Helm repository for awx-operator:
┌──[root@vms81.liruilongs.github.io]-[~/AWK]
└─$helm repo add awx-operator https://ansible.github.io/awx-operator/
"awx-operator" has been added to your repositories
┌──[root@vms81.liruilongs.github.io]──[~/AWK]
└─$helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "liruilong_repo" chart repository
...Successfully got an update from the "elastic" chart repository
...Successfully got an update from the "prometheus-community" chart repository
...Successfully got an update from the "azure" chart repository
...Unable to get an update from the "ali" chart repository (https://apphub.aliyuncs.com):
failed to fetch https://apphub.aliyuncs.com/index.yaml: 504 Gateway Time-out
...Successfully got an update from the "awx-operator" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈Happy Helming!⎈
Search the repository for the awx-operator chart:
┌──[root@vms81.liruilongs.github.io]-[~/AWK]
└─$helm search repo awx-operator
NAME CHART VERSION APP VERSION DESCRIPTION
awx-operator/awx-operator 0.30.0 0.30.0 A Helm chart for the AWX Operator
For a customized installation, run helm install my-awx-operator awx-operator/awx-operator -n awx --create-namespace -f myvalues.yaml.
If you use a custom installation, you need to enable the corresponding switches in myvalues.yaml; you can configure HTTPS, a standalone PostgreSQL database, a load balancer, LDAP authentication, etc. The template for that file is the values.yaml inside the chart package, which you can pull as shown below.
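For example, the chart can be pulled and its default values exported with standard Helm 3 commands (the file name myvalues.yaml is just an example):
helm pull awx-operator/awx-operator --untar                   # unpack the chart into ./awx-operator
helm show values awx-operator/awx-operator > myvalues.yaml    # default values to edit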
Here we install with the default configuration, so there is no need to specify a values file.
┌──[root@vms81.liruilongs.github.io]-[~/AWK]
└─$helm install -n awx --create-namespace my-awx-operator awx-operator/awx-operator
NAME: my-awx-operator
LAST DEPLOYED: Mon Oct 10 16:29:24 2022
NAMESPACE: awx
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
AWX Operator installed with Helm Chart version 0.30.0.
┌──[root@vms81.liruilongs.github.io]──[~/AWK]
└─$
Then check the Pod status:
┌──[root@vms81.liruilongs.github.io]-[~/awx/awx-operator]
└─$kubectl get pods
NAME READY STATUS RESTARTS AGE
awx-demo-postgres-13-0 0/1 Pending 0 105s
awx-operator-controller-manager-79ff9599d8-2v5fn 2/2 Running 0 128m
┌──[root@vms81.liruilongs.github.io]-[~/awx/awx-operator]
└─$kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
awx-demo-postgres-13 ClusterIP None <none> 5432/TCP 5m48s
awx-operator-controller-manager-metrics-service ClusterIP 10.107.17.167 <none> 8443/TCP 132m
The corresponding PostgreSQL pod awx-demo-postgres-13-0 is Pending; look at the events:
┌──[root@vms81.liruilongs.github.io]-[~/awx/awx-operator]
└─$kubectl describe pods awx-demo-postgres-13-0 | grep -i -A 10 event
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 23s (x8 over 7m31s) default-scheduler 0/3 nodes are available: 3 pod has unbound immediate PersistentVolumeClaims.
┌──[root@vms81.liruilongs.github.io]-[~/awx/awx-operator]
└─$kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
postgres-13-awx-demo-postgres-13-0 Pending 10m
┌──[root@vms81.liruilongs.github.io]-[~/awx/awx-operator]
└─$kubectl describe pvc postgres-13-awx-demo-postgres-13-0 | grep -i -A 10 event
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal FailedBinding 82s (x42 over 11m) persistentvolume-controller no persistent volumes available for this claim and no storage class is set
┌──[root@vms81.liruilongs.github.io]-[~/awx/awx-operator]
└─$kubectl get sc
No resources found
OK, the reason for Pending is that there is no default SC.
For stateful applications, we need a default SC (dynamic volume provisioning) before the StatefulSet is created; it dynamically handles the creation of PVs and PVCs and provides the data storage for PostgreSQL, so we need to create an SC here.
For convenience, we use local storage as the backend. In general a PV is network storage that does not belong to any node (NFS is a common choice), and the SC specifies its provisioner through the provisioner field. Once the StorageClass is created and marked as default, a PVC that does not name a StorageClass is provisioned from it.
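One common choice that matches the provisioner shown below is the Rancher local-path-provisioner; a sketch of installing it from its upstream manifest (URL taken from the project's repository, adjust to the release you want):
kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml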
Confirm that it was created successfully:
┌──[root@vms81.liruilongs.github.io]-[~/awx/awx-operator]
└─$kubectl get sc
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
local-path rancher.io/local-path Delete WaitForFirstConsumer false 2m6s
Set it as the default SC:
https://kubernetes.io/zh-cn/docs/tasks/administer-cluster/change-default-storage-class/
┌──[root@vms81.liruilongs.github.io]-[~/awx/awx-operator]
└─$kubectl patch storageclass local-path -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class": "true"}}}'
storageclass.storage.k8s.io/local-path patched
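To verify that local-path is now the default (its NAME should be shown with a (default) suffix):
kubectl get storageclass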
┌──[root@vms81.liruilongs.github.io]-[~/awx/awx-operator]
└─$kubectl get pods
NAME READY STATUS RESTARTS AGE
awx-demo-postgres-13-0 0/1 Pending 0 46m
awx-operator-controller-manager-79ff9599d8-2v5fn 2/2 Running 0 173m
Export the PVC to a YAML file, then delete and recreate it so that it is provisioned by the new default SC:
┌──[root@vms81.liruilongs.github.io]-[~/awx/awx-operator]
└─$kubectl get pvc postgres-13-awx-demo-postgres-13-0 -o yaml > postgres-13-awx-demo-postgres-13-0.yaml
┌──[root@vms81.liruilongs.github.io]-[~/awx/awx-operator]
└─$kubectl delete -f postgres-13-awx-demo-postgres-13-0.yaml
persistentvolumeclaim "postgres-13-awx-demo-postgres-13-0" deleted
┌──[root@vms81.liruilongs.github.io]-[~/awx/awx-operator]
└─$kubectl apply -f postgres-13-awx-demo-postgres-13-0.yaml
persistentvolumeclaim/postgres-13-awx-demo-postgres-13-0 created
Check the status of the PVC; you may need to wait a while. Bound means it has been bound.
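Instead of polling, you can also watch the PVC until it flips to Bound (standard kubectl watch flag):
kubectl get pvc postgres-13-awx-demo-postgres-13-0 -w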
┌──[root@vms81.liruilongs.github.io]-[~/awx/awx-operator]
└─$kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
postgres-13-awx-demo-postgres-13-0 Pending local-path 3s
┌──[root@vms81.liruilongs.github.io]-[~/awx/awx-operator]
└─$kubectl describe pvc postgres-13-awx-demo-postgres-13-0 | grep -i -A 10 event
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal WaitForPodScheduled 42s persistentvolume-controller waiting for pod awx-demo-postgres-13-0 to be scheduled
Normal ExternalProvisioning 41s persistentvolume-controller waiting for a volume to be created, either by external provisioner "rancher.io/local-path" or manually created by system administrator
Normal Provisioning 41s rancher.io/local-path_local-path-provisioner-7c795b5576-gmrx4_d69ca393-bcbe-4abb-8b22-cd8db3b26bf8 External provisioner is provisioning volume for claim "awx/postgres-13-awx-demo-postgres-13-0"
Normal ProvisioningSucceeded 39s rancher.io/local-path_local-path-provisioner-7c795b5576-gmrx4_d69ca393-bcbe-4abb-8b22-cd8db3b26bf8 Successfully provisioned volume pvc-44b7687c-de18-45d2-bef6-8fb2d1c415d3
┌──[root@vms81.liruilongs.github.io]-[~/awx/awx-operator]
└─$kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
postgres-13-awx-demo-postgres-13-0 Bound pvc-44b7687c-de18-45d2-bef6-8fb2d1c415d3 8Gi RWO local-path 53s
┌──[root@vms81.liruilongs.github.io]-[~/awx/awx-operator]
└─$
┌──[root@vms81.liruilongs.github.io]-[~/awx-operator/crds]
└─$kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-44b7687c-de18-45d2-bef6-8fb2d1c415d3 8Gi RWO Delete Bound awx/postgres-13-awx-demo-postgres-13-0 local-path 54s
Look at the pod status: the PostgreSQL pod has now been created successfully. You may need to wait a while until all the pods are Running.
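If you prefer not to poll manually, you can block until every pod in the namespace reports Ready (standard kubectl wait; the timeout is just an example):
kubectl -n awx wait --for=condition=Ready pod --all --timeout=10m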
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$kubectl get pods
NAME READY STATUS RESTARTS AGE
awx-demo-65d9bf775b-hc58x 4/4 Running 0 79m
awx-demo-postgres-13-0 1/1 Running 0 143m
awx-operator-controller-manager-79ff9599d8-m7t8k 2/2 Running 0 81m
Check the Services and test access:
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
awx-demo-postgres-13 ClusterIP None <none> 5432/TCP 143m
awx-demo-service NodePort 10.104.176.210 <none> 80:30066/TCP 79m
awx-operator-controller-manager-metrics-service ClusterIP 10.108.71.67 <none> 8443/TCP 82m
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$curl 192.168.26.82:30066
<!doctype html><html lang="en"><head><script nonce="cw6jhvbF7S5bfKJPsimyabathhaX35F5hIyR7emZNT0=" type="text/javascript">window.....
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$
Get the admin password:
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$kubectl get secrets
NAME TYPE DATA AGE
awx-demo-admin-password Opaque 1 146m
awx-demo-app-credentials Opaque 3 82m
awx-demo-broadcast-websocket Opaque 1 146m
awx-demo-postgres-configuration Opaque 6 146m
awx-demo-receptor-ca kubernetes.io/tls 2 82m
awx-demo-receptor-work-signing Opaque 2 82m
awx-demo-secret-key Opaque 1 146m
awx-demo-token-sc92t kubernetes.io/service-account-token 3 82m
awx-operator-controller-manager-token-tpv2m kubernetes.io/service-account-token 3 84m
default-token-864fk kubernetes.io/service-account-token 3 4h32m
redhat-operators-pull-secret Opaque 1 146m
sh.helm.release.v1.my-awx-operator.v1 helm.sh/release.v1 1 84m
┌──[root@vms81.liruilongs.github.io]-[~/awx-operator/crds]
└─$echo $(kubectl get secret awx-demo-admin-password -o jsonpath="{.data.password}" | base64 --decode)
tP59YoIWSS6NgCUJYQUG4cXXJIaIc7ci
┌──[root@vms81.liruilongs.github.io]-[~/awx-operator/crds]
└─$
Access test
The Service is published as a NodePort by default, so we can access it from the subnet via any node IP plus the port: http://192.168.26.82:30066/#/login
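If the NodePort is not reachable from where you are, a port-forward works as well (standard kubectl; the local port 8080 is just an example):
kubectl -n awx port-forward svc/awx-demo-service 8080:80
# then browse to http://127.0.0.1:8080/#/login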
I have followed the steps as described in this link.
When I am on the helm install section (Step 2) and try to run:
helm install --name web ./demo
I am getting the following error:
Get https://10.96.0.1:443/version?timeout=32s: dial tcp 10.96.0.1:443: i/o timeout
Expected Result: It should install and deploy the chart.
This issue relates to your Kubernetes configuration, not to Helm itself.
I assume you are also not able to see output from other helm commands like helm list, etc.
Lots of people hit this issue because of an improperly configured CNI (typically Calico), and sometimes it happens because the kubeconfig is missing.
Solutions are:
migrate from calico to flannel
Change the --pod-network-cidr for Calico from 192.168.0.0/16 to 172.16.0.0/16 when using kubeadm to init the cluster, e.g. kubeadm init --pod-network-cidr=172.16.0.0/16
You can find more related info in a similar GitHub helm issue.
Simple single-machine example:
1) kubeadm init --pod-network-cidr=172.16.0.0/16
2) kubectl taint nodes --all node-role.kubernetes.io/master-
3) kubectl apply -f https://docs.projectcalico.org/v3.8/manifests/calico.yaml
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
4) install helm
curl https://raw.githubusercontent.com/helm/helm/master/scripts/get | bash
kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
helm init --service-account tiller
5) create and install chart
$ helm create demo
Creating demo
$ helm install --name web ./demo
NAME: web
LAST DEPLOYED: Tue Jul 16 10:44:15 2019
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
==> v1/Deployment
NAME READY UP-TO-DATE AVAILABLE AGE
web-demo 0/1 1 0 0s
==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
web-demo-6986c66d7d-vctql 0/1 ContainerCreating 0 0s
==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
web-demo ClusterIP 10.106.140.176 <none> 80/TCP 0s
NOTES:
1. Get the application URL by running these commands:
export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=demo,app.kubernetes.io/instance=web" -o jsonpath="{.items[0].metadata.name}")
echo "Visit http://127.0.0.1:8080 to use your application"
kubectl port-forward $POD_NAME 8080:80
6) result
$ kubectl get all
NAME READY STATUS RESTARTS AGE
pod/web-demo-6986c66d7d-vctql 1/1 Running 0 75s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 11m
service/web-demo ClusterIP 10.106.140.176 <none> 80/TCP 75s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/web-demo 1/1 1 1 75s
NAME DESIRED CURRENT READY AGE
replicaset.apps/web-demo-6986c66d7d 1 1 1 75s
You can find more info on how to configure Helm and Kubernetes itself in the Get Started With Kubernetes Using Minikube article.
I am trying to get metrics in the Kubernetes dashboard. For that, I'm running the influxdb and heapster pods in my kube-system namespace. I checked the status of the pods using the command kubectl get pods -n kube-system. Here is the link which I followed. But heapster shows the following log:
E1023 13:41:07.915723 1 reflector.go:190] k8s.io/heapster/metrics/util/util.go:30: Failed to list *v1.Node: Get https://kubernetes.default/api/v1/nodes?resourceVersion=0: dial tcp: i/o timeout
Could anybody suggest where I need to make changes in my configuration?
It looks like heapster cannot talk to your kube-apiserver through the kubernetes service in the default namespace. A few things you can try:
Check that the service is defined in the default namespace:
$ kubectl get svc kubernetes
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 92d
Check that all your kube-proxy pods are running ok:
$ kubectl -n kube-system -l=k8s-app=kube-proxy get pods
NAME READY STATUS RESTARTS AGE
kube-proxy-xxxxx 1/1 Running 0 4d18h
...
Check that all your overlay network pods are running. For example, for Calico:
$ kubectl -n kube-system -l=k8s-app=calico-node get pods
NAME READY STATUS RESTARTS AGE
calico-node-88fgd 2/2 Running 3 4d21h
...
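As an additional check, you can verify from inside the cluster that the kubernetes service is reachable at all (a sketch using a throwaway pod; the curlimages/curl image is just an example, and even a 401/403 response proves basic connectivity):
$ kubectl run tmp --rm -it --restart=Never --image=curlimages/curl -- \
    curl -k -m 5 https://kubernetes.default/version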