Error: k8s NodePort does not open TCP port on host - kubernetes

I am trying to expose a service in k8s.
When I run the expose command kubectl expose deployment demo-api-app -n default --type=NodePort --name=my-service everything looks OK.
I have endpoints and a NodePort assigned:
[root#mys-servver test_connect]# kubectl get endpoints my-service
NAME ENDPOINTS AGE
my-service 10.233.93.1:8081 4m31s
root#ppfxasmsnd1005 test_connect]# kubectl get svc my-service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
my-service NodePort 10.233.11.26 <none> 8081:31594/TCP 5m12s
[root#ppfxasmsnd1005 test_connect]#
The NodePort should be listening on 31594, but nothing is listening (I tried the following command on every worker and control plane node):
[root#ys-servver test_connect]# netstat -tulpn | grep 31594
[root#ys-servver test_connect]#
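For reference, with kube-proxy in iptables mode the NodePort is programmed as NAT rules in the KUBE-NODEPORTS chain rather than necessarily as a listening socket, so a complementary check on a node would be something like this (a sketch using the 31594 port from above):
iptables -t nat -L KUBE-NODEPORTS -n | grep 31594
iptables-save -t nat | grep 31594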
There is nothing relevant in the kube-proxy logs (I tried to increase the log verbosity, but that didn't help):
[root#ys-servver test_connect]# kubectl logs -n kube-system -l "k8s-app=kube-proxy"
I0216 08:15:40.569350 1 conntrack.go:52] "Setting nf_conntrack_max" nf_conntrack_max=524288
I0216 08:15:40.569546 1 config.go:317] "Starting service config controller"
I0216 08:15:40.569556 1 shared_informer.go:255] Waiting for caches to sync for service config
I0216 08:15:40.569599 1 config.go:226] "Starting endpoint slice config controller"
I0216 08:15:40.569614 1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
I0216 08:15:40.569614 1 config.go:444] "Starting node config controller"
I0216 08:15:40.569623 1 shared_informer.go:255] Waiting for caches to sync for node config
I0216 08:15:40.670408 1 shared_informer.go:262] Caches are synced for service config
I0216 08:15:40.670446 1 shared_informer.go:262] Caches are synced for endpoint slice config
I0216 08:15:40.670476 1 shared_informer.go:262] Caches are synced for node config
I0216 08:15:40.721944 1 conntrack.go:52] "Setting nf_conntrack_max" nf_conntrack_max=131072
I0216 08:15:40.722143 1 config.go:226] "Starting endpoint slice config controller"
...
I deployed with Kubespray and kube-proxy uses iptables mode.
System: RHEL 8, SELinux disabled on all nodes.
What's wrong?
PS:
[root#servver test_connect]# kubectl version
Client Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.0", GitCommit:"ab69524f795c42094a6630298ff53f3c3ebab7f4", GitTreeState:"clean", BuildDate:"2021-12-07T18:16:20Z", GoVersion:"go1.17.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.5", GitCommit:"804d6167111f6858541cef440ccc53887fbbc96a", GitTreeState:"clean", BuildDate:"2022-12-08T10:08:09Z", GoVersion:"go1.19.4", Compiler:"gc", Platform:"linux/amd64"}
WARNING: version difference between client (1.23) and server (1.25) exceeds the supported minor version skew of +/-1
[root#servver test_connect]# kubectl cluster-info
Kubernetes control plane is running at https://127.0.0.1:6443
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
[root#servver test_connect]#
[root#servver test_connect]# sysctl -p
net.ipv4.ip_forward = 1
kernel.keys.root_maxbytes = 25000000
kernel.keys.root_maxkeys = 1000000
kernel.panic = 10
kernel.panic_on_oops = 1
vm.overcommit_memory = 1
vm.panic_on_oom = 0
net.ipv4.ip_local_reserved_ports = 30000-32767
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
[root#servver test_connect]#
I need the NodePort to be open on all nodes.
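To confirm whether traffic is actually rejected rather than just missing from netstat, a test from another machine against any node would be something like this (a sketch; substitute a real node IP):
curl -v http://<node-ip>:31594
nc -vz <node-ip> 31594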

Related

Kafka Connect API endpoint and problem with nginx ingress in second Kubernetes namespace

In our kube cluster we have 2 namespaces, kafka1 and kafka2:
kubectl get endpoints -n kafka2
NAME ENDPOINTS AGE
kafka2-akhq 10.233.76.160:8080 26d
kafka2-connect-api 10.233.67.2:8083,10.233.76.173:8083,10.233.90.52:8083 28m
and
kubectl get endpoints -n kafka1
NAME ENDPOINTS AGE
kafka1-akhq 10.233.67.157:8080 9d
kafka1-connect-api 10.233.67.3:8083,10.233.76.127:8083,10.233.90.4:8083 15h
Everything is the same except that 1 is replaced by 2 in the YAML.
When I exec into the ingress-nginx pod:
kubectl exec -n ingress-nginx ingress-nginx-controller -it -- bash
I can only curl the second connector:
bash-5.1$ curl 10.233.67.2:8083
{"version":"3.2.0","commit":"38103ffaa962ef50","kafka_cluster_id":"KAEfPEuHSR-8znSgxlzJyQ"}bash-5.1$
When I try an IP from the first namespace, I get no response:
bash-5.1$ curl 10.233.67.3:8083
^C
Any idea how I can gather more debug information or how to fix this issue?
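If it helps to dig further, two things worth checking (a sketch; the curl image and the NetworkPolicy suspicion are assumptions, not something confirmed above) are whether a NetworkPolicy in kafka1 blocks traffic from the ingress-nginx namespace, and whether the endpoint answers from inside its own namespace:
kubectl get networkpolicy -n kafka1
kubectl run curl-test -n kafka1 --rm -it --restart=Never --image=curlimages/curl -- curl -s 10.233.67.3:8083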

Can we set up a k8s bare metal server to run a BIND DNS server (named) and have access to it from the outside on port 53?

I have setup a k8s cluster using 2 bare metal servers (1 master and 1 worker) using kubespray with default settings (kube_proxy_mode: iptables and dns_mode: coredns) and I would like to run a BIND DNS server inside to manage a couple of domain names.
I deployed a helloworld web app with Helm 3 for testing. Everything works like a charm (HTTP, HTTPS, Let's Encrypt through cert-manager).
kubectl version
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.4", GitCommit:"8d8aa39598534325ad77120c120a22b3a990b5ea", GitTreeState:"clean", BuildDate:"2020-03-12T21:03:42Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.7", GitCommit:"be3d344ed06bff7a4fc60656200a93c74f31f9a4", GitTreeState:"clean", BuildDate:"2020-02-11T19:24:46Z", GoVersion:"go1.13.6", Compiler:"gc", Platform:"linux/amd64"}
kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8smaster Ready master 22d v1.16.7
k8sslave Ready <none> 21d v1.16.7
I deployed an image of my BIND DNS server (named) with a Helm 3 chart in the default namespace, with a service exposing port 53 of the bind app container.
I have tested DNS resolution from a pod against the bind service, and it works well. Here is the test of the bind k8s service from the master node:
kubectl -n default get svc bind -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
bind ClusterIP 10.233.31.255 <none> 53/TCP,53/UDP 4m5s app=bind,release=bind
kubectl get endpoints bind
NAME ENDPOINTS AGE
bind 10.233.75.239:53,10.233.93.245:53,10.233.75.239:53 + 1 more... 4m12s
export SERVICE_IP=`kubectl get services bind -o go-template='{{.spec.clusterIP}}{{"\n"}}'`
nslookup www.example.com ${SERVICE_IP}
Server: 10.233.31.255
Address: 10.233.31.255#53
Name: www.example.com
Address: 176.31.XXX.XXX
So the bind DNS app is deployed and is working fine through the bind k8s service.
For the next step, I followed the https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/ documentation to set up the NGINX Ingress Controller (both ConfigMap and Service) to handle TCP/UDP requests on port 53 and to redirect them to the bind DNS app.
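For context, the setup from that guide boils down to a ConfigMap entry per exposed port plus matching ports on the controller Service; roughly (a sketch based on that documentation, assuming the default ingress-nginx namespace and the default/bind Service shown above; the tcp-services ConfigMap is analogous):
apiVersion: v1
kind: ConfigMap
metadata:
  name: udp-services
  namespace: ingress-nginx
data:
  "53": "default/bind:53"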
When I test the name resolution from an external computer it does not work:
nslookup www.example.com <IP of the k8s master>
;; connection timed out; no servers could be reached
I dug into k8s configuration, logs, etc. and I found error messages in the kube-proxy logs:
ps auxw | grep kube-proxy
root 19984 0.0 0.2 141160 41848 ? Ssl Mar26 19:39 /usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf --hostname-override=k8smaster
journalctl --since "2 days ago" | grep kube-proxy
<NOTHING RETURNED>
KUBEPROXY_FIRST_POD=`kubectl get pods -n kube-system -l k8s-app=kube-proxy -o go-template='{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}' | head -n 1`
kubectl logs -n kube-system ${KUBEPROXY_FIRST_POD}
I0326 22:26:03.491900 1 node.go:135] Successfully retrieved node IP: 91.121.XXX.XXX
I0326 22:26:03.491957 1 server_others.go:150] Using iptables Proxier.
I0326 22:26:03.492453 1 server.go:529] Version: v1.16.7
I0326 22:26:03.493179 1 conntrack.go:52] Setting nf_conntrack_max to 262144
I0326 22:26:03.493647 1 config.go:131] Starting endpoints config controller
I0326 22:26:03.493663 1 config.go:313] Starting service config controller
I0326 22:26:03.493669 1 shared_informer.go:197] Waiting for caches to sync for endpoints config
I0326 22:26:03.493679 1 shared_informer.go:197] Waiting for caches to sync for service config
I0326 22:26:03.593986 1 shared_informer.go:204] Caches are synced for endpoints config
I0326 22:26:03.593992 1 shared_informer.go:204] Caches are synced for service config
E0411 17:02:48.113935 1 proxier.go:927] can't open "externalIP for ingress-nginx/ingress-nginx:bind-udp" (91.121.XXX.XXX:53/udp), skipping this externalIP: listen udp 91.121.XXX.XXX:53: bind: address already in use
E0411 17:02:48.119378 1 proxier.go:927] can't open "externalIP for ingress-nginx/ingress-nginx:bind-tcp" (91.121.XXX.XXX:53/tcp), skipping this externalIP: listen tcp 91.121.XXX.XXX:53: bind: address already in use
Then I looked at what was already using port 53...
netstat -lpnt | grep 53
tcp 0 0 0.0.0.0:5355 0.0.0.0:* LISTEN 1682/systemd-resolv
tcp 0 0 87.98.XXX.XXX:53 0.0.0.0:* LISTEN 19984/kube-proxy
tcp 0 0 169.254.25.10:53 0.0.0.0:* LISTEN 14448/node-cache
tcp6 0 0 :::9253 :::* LISTEN 14448/node-cache
tcp6 0 0 :::9353 :::* LISTEN 14448/node-cache
A look at process 14448/node-cache:
cat /proc/14448/cmdline
/node-cache-localip169.254.25.10-conf/etc/coredns/Corefile-upstreamsvccoredns
So coredns is already handling port 53, which is normal because it's the k8s internal DNS service.
In the coredns documentation (https://github.com/coredns/coredns/blob/master/README.md) they mention a -dns.port option to use a distinct port... but when I look into kubespray (which has 3 Jinja templates https://github.com/kubernetes-sigs/kubespray/tree/release-2.12/roles/kubernetes-apps/ansible/templates for creating the coredns ConfigMap, Services, etc., similar to https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/#coredns), everything is hardcoded to port 53.
So my question is: is there a k8s cluster configuration/workaround so I can run my own DNS server and expose it on port 53?
Maybe?
Set up coredns to use a different port than 53? Seems hard, and I'm really not sure this makes sense!
I could set up my bind k8s service to expose port 5353 and configure the nginx ingress controller to handle this 5353 port and redirect it to the app's port 53. But this would require setting up iptables to route external DNS requests received on port 53 to my bind k8s service on port 5353. What would the iptables config be (INPUT / PREROUTING or FORWARD)? Would this kind of network configuration break coredns?
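For what it's worth, a redirect of the kind described in the last option would typically be a nat PREROUTING rule on the node, along these lines (only a sketch; the ports come from the idea above, and whether such rules coexist cleanly with kube-proxy's own NAT rules is untested):
iptables -t nat -A PREROUTING -p udp --dport 53 -j REDIRECT --to-ports 5353
iptables -t nat -A PREROUTING -p tcp --dport 53 -j REDIRECT --to-ports 5353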
Regards,
Chris
I suppose your nginx-ingress doesn't work as expected. You need a load balancer provider, such as MetalLB, in your bare metal k8s cluster to receive external connections on ports like 53. And you don't need nginx-ingress to use bind; just change the bind Service type from ClusterIP to LoadBalancer and make sure this Service gets an external IP. Your Helm chart's manual may help with switching to LoadBalancer.
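For illustration, a minimal sketch of what that Service could look like, reusing the selector and port shown in the question (note that older clusters may not accept TCP and UDP mixed on one LoadBalancer Service, in which case two separate Services are needed):
apiVersion: v1
kind: Service
metadata:
  name: bind
  namespace: default
spec:
  type: LoadBalancer
  selector:
    app: bind
    release: bind
  ports:
  - name: dns-udp
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP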

kube-proxy not able to list endpoints and services

I'm new to kubernetes, and I'm trying to create a cluster. But after I configure the master with the kubeadm command, I see there are some errors with the pods, and this results in a master that is always in a NotReady state.
All seems to originate from the fact that kube-proxy cannot list the endpoints and the services... and for this reason (or so I understand) cannot update the iptables.
Here is my kubectl version:
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1", GitCommit:"b7394102d6ef778017f2ca4046abbaa23b88c290", GitTreeState:"clean", BuildDate:"2019-04-08T17:11:31Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1", GitCommit:"b7394102d6ef778017f2ca4046abbaa23b88c290", GitTreeState:"clean", BuildDate:"2019-04-08T17:02:58Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
And here are the logs from the kube-proxy pod:
$ kubectl logs -n kube-system kube-proxy-xjxck
W0430 12:33:28.887260 1 server_others.go:267] Flag proxy-mode="" unknown, assuming iptables proxy
W0430 12:33:28.913671 1 node.go:113] Failed to retrieve node info: Unauthorized
I0430 12:33:28.915780 1 server_others.go:147] Using iptables Proxier.
W0430 12:33:28.916065 1 proxier.go:314] invalid nodeIP, initializing kube-proxy with 127.0.0.1 as nodeIP
W0430 12:33:28.916089 1 proxier.go:319] clusterCIDR not specified, unable to distinguish between internal and external traffic
I0430 12:33:28.917555 1 server.go:555] Version: v1.14.1
I0430 12:33:28.959345 1 conntrack.go:52] Setting nf_conntrack_max to 131072
I0430 12:33:28.960392 1 config.go:202] Starting service config controller
I0430 12:33:28.960444 1 controller_utils.go:1027] Waiting for caches to sync for service config controller
I0430 12:33:28.960572 1 config.go:102] Starting endpoints config controller
I0430 12:33:28.960609 1 controller_utils.go:1027] Waiting for caches to sync for endpoints config controller
E0430 12:33:28.970720 1 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"fh-ubuntu01.159a40901fa85264", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"fh-ubuntu01", UID:"fh-ubuntu01", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kube-proxy.", Source:v1.EventSource{Component:"kube-proxy", Host:"fh-ubuntu01"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf2a2e0639406264, ext:334442672, loc:(*time.Location)(0x2703080)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf2a2e0639406264, ext:334442672, loc:(*time.Location)(0x2703080)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Unauthorized' (will not retry!)
E0430 12:33:28.970939 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Endpoints: Unauthorized
E0430 12:33:28.971106 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: Unauthorized
E0430 12:33:29.977038 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Endpoints: Unauthorized
E0430 12:33:29.979890 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: Unauthorized
E0430 12:33:30.980098 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Endpoints: Unauthorized
now, I've created a new ClusterRoleBinding this way:
$ kubectl create clusterrolebinding kube-proxy-binding --clusterrole=system:node-proxier --user=system:kube-proxy
and if I describe the ClusterRole, I can see this:
$ kubectl describe clusterrole system:node-proxier
Name: system:node-proxier
Labels: kubernetes.io/bootstrapping=rbac-defaults
Annotations: rbac.authorization.kubernetes.io/autoupdate: true
PolicyRule:
Resources Non-Resource URLs Resource Names Verbs
--------- ----------------- -------------- -----
events [] [] [create patch update]
nodes [] [] [get]
endpoints [] [] [list watch]
services [] [] [list watch]
so the user "system:kube-proxy" should be able to list the endpoints and the services, right? Now, if I print the YAML file of the kube-proxy daemonSet, I get his:
$ kubectl get configmap kube-proxy -n kube-system -o yaml
apiVersion: v1
data:
  config.conf: |-
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    bindAddress: 0.0.0.0
    clientConnection:
      acceptContentTypes: ""
      burst: 10
      contentType: application/vnd.kubernetes.protobuf
      kubeconfig: /var/lib/kube-proxy/kubeconfig.conf
      qps: 5
    clusterCIDR: ""
    configSyncPeriod: 15m0s
    conntrack:
      max: null
      maxPerCore: 32768
      min: 131072
      tcpCloseWaitTimeout: 1h0m0s
      tcpEstablishedTimeout: 24h0m0s
    enableProfiling: false
    healthzBindAddress: 0.0.0.0:10256
    hostnameOverride: ""
    iptables:
      masqueradeAll: false
      masqueradeBit: 14
      minSyncPeriod: 0s
      syncPeriod: 30s
    ipvs:
      excludeCIDRs: null
      minSyncPeriod: 0s
      scheduler: ""
      syncPeriod: 30s
    kind: KubeProxyConfiguration
    metricsBindAddress: 127.0.0.1:10249
    mode: ""
    nodePortAddresses: null
    oomScoreAdj: -999
    portRange: ""
    resourceContainer: /kube-proxy
    udpIdleTimeout: 250ms
    winkernel:
      enableDSR: false
      networkName: ""
      sourceVip: ""
  kubeconfig.conf: |-
    apiVersion: v1
    kind: Config
    clusters:
    - cluster:
        certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        server: https://10.0.1.1:6443
      name: default
    contexts:
    - context:
        cluster: default
        namespace: default
        user: default
      name: default
    current-context: default
    users:
    - name: default
      user:
        tokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
kind: ConfigMap
metadata:
  creationTimestamp: "2019-03-21T10:34:03Z"
  labels:
    app: kube-proxy
  name: kube-proxy
  namespace: kube-system
  resourceVersion: "4458115"
  selfLink: /api/v1/namespaces/kube-system/configmaps/kube-proxy
  uid: d8a454fb-4bc4-11e9-b0b4-00155d044109
I can see that "user: default" that confuses me... which user is it trying to authenticate with? is there an actual user named "default"?
thank you very much!
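For reference, the kubeconfig above authenticates with the pod's mounted service account token, so the effective identity is normally the kube-proxy service account rather than a literal user called "default". A quick way to check what that identity is allowed to do (a sketch assuming the standard kube-system/kube-proxy service account name):
kubectl auth can-i list endpoints --as=system:serviceaccount:kube-system:kube-proxy
kubectl auth can-i list services --as=system:serviceaccount:kube-system:kube-proxy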
Output from kubectl get po -n kube-system:
$ kubectl get po -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-fb8b8dccf-27qck 0/1 Pending 0 7d15h
coredns-fb8b8dccf-dd6bh 0/1 Pending 0 7d15h
kube-apiserver-fh-ubuntu01 1/1 Running 1 7d15h
kube-controller-manager-fh-ubuntu01 1/1 Running 0 7d15h
kube-proxy-xjxck 1/1 Running 0 43h
kube-scheduler-fh-ubuntu01 1/1 Running 1 7d15h
weave-net-psqh5 1/2 CrashLoopBackOff 2144 7d15h
Cluster health looks healthy:
$ kubectl get cs
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-2 Healthy {"health": "true"}
etcd-3 Healthy {"health": "true"}
etcd-0 Healthy {"health": "true"}
etcd-1 Healthy {"health": "true"}
Run the command below to check cluster health:
kubectl get cs
Then check the status of the control plane services:
kubectl get po -n kube-system
The issue seems to be with the weave-net-psqh5 pod. Find out why it is getting into CrashLoopBackOff status.
Share the logs from weave-net-psqh5.
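For example, something like the following should show why it keeps restarting (a sketch; the container names weave and weave-npc are the usual ones in the weave-net DaemonSet, so treat them as an assumption):
kubectl -n kube-system logs weave-net-psqh5 -c weave --previous
kubectl -n kube-system logs weave-net-psqh5 -c weave-npc
kubectl -n kube-system describe pod weave-net-psqh5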

Get ServiceUnavailable from `kubectl top` with heapster

I have a managed kubernetes setup, called Cluster Container Engine (CCE), in the Open Telekom Cloud. Their documentation can be found online.
My CCE has one master and three nodes which run k8s version 1.9.2 (more details below). I can access the CCE through kubectl and deploy new pods onto it.
The CCE has a deployment of heapster preinstalled. However, attempting to inspect node resource usage fails (I can observe the same effect for pod usage):
$ kubectl top pods
Error from server (ServiceUnavailable): the server is currently unable to handle the request (get services http:heapster:)
I've attempted all debugging steps I could think of (see below) and I'm still lost when it comes to fixing this. Any advice?
The deployment, pod and service items for heapster are present (outputs filtered to include only heapster):
$ kubectl get po -n kube-system
NAME READY STATUS RESTARTS AGE
heapster-apiserver-84b844ffcf-lzh4b 1/1 Running 0 47m
$ kubectl get svc -n kube-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
heapster ClusterIP 10.247.150.244 <none> 80/TCP 19d
$ kubectl get deploy -n kube-system
NAME READY UP-TO-DATE AVAILABLE AGE
heapster-apiserver 1/1 1 1 19d
To check that heapster does indeed collect metrics properly, I've ssh'd into one of the nodes and executed:
$ curl -k http://10.247.150.244:80/api/v1/model/metrics/
[
"cpu/usage_rate",
"memory/usage",
"cpu/request",
"cpu/limit",
"memory/request",
"memory/limit"
]
Pod Log Output
Finally, I checked the log output from the heapster-apiserver-84b844ffcf-lzh4b pod:
$ kubectl logs -n kube-system heapster-apiserver-84b844ffcf-lzh4b
I0311 13:38:18.334525 1 heapster.go:78] /heapster --source=kubernetes.summary_api:''?kubeletHttps=true&inClusterConfig=false&insecure=true&auth=/srv/config --api-server --secure-port=6443
I0311 13:38:18.334718 1 heapster.go:79] Heapster version v1.5.3
I0311 13:38:18.340912 1 configs.go:61] Using Kubernetes client with master "https://192.168.1.228:5443" and version <nil>
I0311 13:38:18.340996 1 configs.go:62] Using kubelet port 10255
I0311 13:38:18.358918 1 heapster.go:202] Starting with Metric Sink
I0311 13:38:18.510751 1 serving.go:327] Generated self-signed cert (/var/run/kubernetes/apiserver.crt, /var/run/kubernetes/apiserver.key)
E0311 13:38:18.540860 1 heapster.go:128] Could not create the API server: missing clientCA file
I0311 13:38:18.558944 1 heapster.go:112] Starting heapster on port 8082
Cluster Info
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.3", GitCommit:"721bfa751924da8d1680787490c54b9179b1fed0", GitTreeState:"clean", BuildDate:"2019-02-01T20:08:12Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"9+", GitVersion:"v1.9.2-CCE2.0.7-B003", GitCommit:"302f471a1e2caa114c9bb708c077fbb363aa2f13", GitTreeState:"clean", BuildDate:"2018-06-20T03:27:16Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
$ kubectl get nodes
192.168.1.163 Ready worker 19d v1.9.2-CCE2.0.7-B003
192.168.1.211 Ready nfs-server 19d v1.9.2-CCE2.0.7-B003
192.168.1.227 Ready worker 19d v1.9.2-CCE2.0.7-B003
All nodes use EulerOS_2.0_SP2 with kernel version 3.10.0-327.59.59.46.h38.x86_64.
I0311 13:38:18.510751 1 serving.go:327] Generated self-signed cert (/var/run/kubernetes/apiserver.crt, /var/run/kubernetes/apiserver.key)
It seems like your API server is running on HTTP, but heapster has an HTTPS URL configured. You need to set the --source parameter to override the Kubernetes master, as described here:
--source=kubernetes:http://master-ip?inClusterConfig=false&useServiceAccount=true&auth=
BTW: heapster has been deprecated and it is advised to switch to metrics server.
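If it helps, that flag lives in the heapster container command shown in the pod log above, so the change would be roughly (a sketch; the deployment name is taken from the question and the exact command line is an assumption):
kubectl -n kube-system edit deployment heapster-apiserver
and adjust the container command to something like:
/heapster --source=kubernetes:http://<master-ip>?inClusterConfig=false&useServiceAccount=true&auth= --api-server --secure-port=6443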

How to resolve Kubernetes API from minikube?

I can't seem to resolve the Kubernetes API service name from within the minikube machine, using minikube ssh:
nslookup kubernetes.default.svc.cluster.local 10.0.0.1
Server: 10.0.0.1
Address 1: 10.0.0.1
nslookup: can't resolve 'kubernetes.default.svc.cluster.local'
Same thing happens with just kubernetes. This happens even though it seems that the kube-dns pods are up-and-running just fine:
# kubectl -n kube-system get pods
NAME READY STATUS RESTARTS AGE
kube-addon-manager-minikube 1/1 Running 0 9m
kube-dns-910330662-16bvw 3/3 Running 0 8m
kubernetes-dashboard-vbn9m 1/1 Running 0 8m
There is no KubeDNS info in cluster-info (but nothing major in the dump):
# kubectl cluster-info
Kubernetes master is running at https://192.168.64.26:8443
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
Version for kubectl:
# kubectl version
Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.5", GitCommit:"17d7182a7ccbb167074be7a87f0a68bd00d58d97", GitTreeState:"clean", BuildDate:"2017-08-31T09:14:02Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.0", GitCommit:"d3ada0119e776222f11ec7945e6d860061339aad", GitTreeState:"clean", BuildDate:"2017-07-26T00:12:31Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
and minikube:
# minikube version
minikube version: v0.21.0
This happens on OSX and both with xhyve and virtualbox as drivers.
There are quite a few minikube/dns issues in the minikube issue tracker, but none of them quite like this. What should the kubectl cluster-info output look like for minikube that has kube-dns working? What about the nslookup, shouldn't that work?
EDIT: It seems like kubectl cluster-info for minikube is returning what it should, but I can't resolve kubernetes from inside the kube-dns pod either:
# kubectl -n kube-system exec -it kube-dns-910330662-1n64f -c kubedns -- nslookup kubernetes.default.svc.cluster.local localhost
Server: 127.0.0.1
Address 1: 127.0.0.1 localhost
nslookup: can't resolve 'kubernetes.default.svc.cluster.local': Name does not resolve
which really should work, shouldn't it?
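One thing worth double-checking (a hedge, not a confirmed diagnosis): 10.0.0.1 used as the server in the first nslookup is normally the ClusterIP of the kubernetes API Service itself, not of kube-dns, so queries sent there would time out regardless. Looking up the actual DNS Service IP and pointing nslookup at it would be a sketch like:
kubectl -n kube-system get svc kube-dns
nslookup kubernetes.default.svc.cluster.local <kube-dns ClusterIP>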