After deploying the Istio bookinfo application on EKS, `kubectl get svc` displays the app's services but `kubectl get pods` returns `no resources found`

I installed Istio on my EKS cluster and installed bookinfo from samples.
$ sudo kubectl apply -f /samples/bookinfo/platform/kube/bookinfo.yaml
After installation, I am able to see the services but not the pods for those services
$ sudo kubectl get services
NAME          TYPE
productpage   ClusterIP
ratings       ClusterIP
reviews       ClusterIP
But the pods backing those services are nowhere to be seen:
$ sudo kubectl get pods
No resources found in default namespace
Any idea why I can see the services but not the pods for the services installed by the bookinfo app?

I've verified the bookinfo app with Istio 1.9.3 and it works correctly.
I changed into the Istio 1.9.3 directory with cd istio-1.9.3 and installed the bookinfo application with kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml.
kubectl get pods
NAME READY STATUS RESTARTS AGE
details-v1-66b6955995-q2nwh 2/2 Running 0 44s
productpage-v1-5d9b4c9849-lhc2b 2/2 Running 0 44s
ratings-v1-fd78f799f-t8gkp 2/2 Running 0 43s
reviews-v1-6549ddccc5-jv2tg 2/2 Running 0 43s
reviews-v2-76c4865449-wjkxx 2/2 Running 0 43s
reviews-v3-6b554c875-9gsnd 2/2 Running 0 42s
kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
details ClusterIP 10.112.2.127 <none> 9080/TCP 81s
kubernetes ClusterIP 10.112.0.1 <none> 443/TCP 6m41s
productpage ClusterIP 10.112.5.110 <none> 9080/TCP 75s
ratings ClusterIP 10.112.1.157 <none> 9080/TCP 79s
reviews ClusterIP 10.112.1.106 <none> 9080/TCP 78s
As you can see, both the pods and the services were deployed correctly.
I would recommend redeploying the bookinfo application with the newest version, and it should work.
You can also apply the manifest from raw.githubusercontent.com instead of the local samples directory to deploy it. You can find more about that in the Istio documentation.
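For example, a minimal sketch of that approach (the release-1.9 branch in the URL is an assumption; adjust it to the Istio version you are actually running):
$ kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.9/samples/bookinfo/platform/kube/bookinfo.yaml
$ kubectl get pods -w
The -w flag just watches the pods until all bookinfo deployments report 2/2 Running.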

Related

What is the correct prometheus URL to be used by prometheus-adapter

I have successfully deployed:
prometheus via helm chart kube-prometheus-stack (https://prometheus-community.github.io/helm-charts)
prometheus-adapter via helm chart prometheus-adapter (https://prometheus-community.github.io/helm-charts)
using the default configuration with slight customization.
I can access prometheus, grafana and alertmanager, query metrics and see fancy charts.
But prometheus-adapter keeps complaining on startup that it can't access/discover metrics:
I0326 08:16:52.266095 1 adapter.go:98] successfully using in-cluster auth
I0326 08:16:52.330094 1 dynamic_serving_content.go:111] Loaded a new cert/key pair for "serving-cert::/var/run/serving-cert/tls.crt::/var/run/serving-cert/tls.key"
E0326 08:16:52.334710 1 provider.go:227] unable to update list of all metrics: unable to fetch metrics for query "{namespace!=\"\",__name__!~\"^container_.*\"}": bad_response: unknown response code 404
I've tried various Prometheus URLs as the prometheus-adapter Deployment command-line argument, but the problem stays more or less the same.
E.g. some of the URLs I've tried are
--prometheus-url=http://prometheus-operated.prom.svc:9090
--prometheus-url=http://prometheus-kube-prometheus-prometheus.prom.svc.cluster.local:9090
The following pods and services are running:
$ kubectl -n prom get pods
NAME READY STATUS RESTARTS AGE
alertmanager-prometheus-kube-prometheus-alertmanager-0 2/2 Running 0 16h
prometheus-adapter-76fcc79b7b-7xvrm 1/1 Running 0 10m
prometheus-grafana-559b79b564-bh85n 2/2 Running 0 16h
prometheus-kube-prometheus-operator-8556f58759-kl84l 1/1 Running 0 16h
prometheus-kube-state-metrics-6bfcd6f648-ms459 1/1 Running 0 16h
prometheus-prometheus-kube-prometheus-prometheus-0 2/2 Running 1 16h
prometheus-prometheus-node-exporter-2x6mt 1/1 Running 0 16h
prometheus-prometheus-node-exporter-bns9n 1/1 Running 0 16h
prometheus-prometheus-node-exporter-sbcjb 1/1 Running 0 16h
$ kubectl -n prom get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
alertmanager-operated ClusterIP None <none> 9093/TCP,9094/TCP,9094/UDP 16h
prometheus-adapter ClusterIP 10.0.144.45 <none> 443/TCP 16h
prometheus-grafana ClusterIP 10.0.94.160 <none> 80/TCP 16h
prometheus-kube-prometheus-alertmanager ClusterIP 10.0.0.135 <none> 9093/TCP 16h
prometheus-kube-prometheus-operator ClusterIP 10.0.170.205 <none> 443/TCP 16h
prometheus-kube-prometheus-prometheus ClusterIP 10.0.250.223 <none> 9090/TCP 16h
prometheus-kube-state-metrics ClusterIP 10.0.135.215 <none> 8080/TCP 16h
prometheus-operated ClusterIP None <none> 9090/TCP 16h
prometheus-prometheus-node-exporter ClusterIP 10.0.70.247 <none> 9100/TCP 16h
$ kubectl -n kube-system get deployment/metrics-server
NAME READY UP-TO-DATE AVAILABLE AGE
metrics-server 1/1 1 1 15d
The prometheus-adapter Helm chart is deployed using the following values:
prometheus:
  url: http://prometheus-kube-prometheus-prometheus.prom.svc.cluster.local
certManager:
  enabled: true
What is the correct value for --prometheus-url for prometheus-adapter in my setup?
The problem is related to the additional path used to expose Prometheus via Ingress.
I was using the additional path prefix /monitoring/prometheus/ in my Ingress configuration.
The solution is to also tell prometheus-adapter that Prometheus is reachable under this path prefix.
Thus the following makes prometheus-adapter happy:
--prometheus-url=http://prometheus-kube-prometheus-prometheus.prom.svc.cluster.local:9090/monitoring/prometheus/
And now I can see custom metrics when executing
kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1
Thank you "rock'n rolla" for giving some hints!
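If you want to sanity-check the URL and path before restarting prometheus-adapter, a quick in-cluster request can confirm that Prometheus answers under that prefix (the service name and namespace are the ones from my outputs above; the curl image is just an example):
$ kubectl -n prom run curl-test --rm -it --restart=Never --image=curlimages/curl -- \
    curl -s http://prometheus-kube-prometheus-prometheus.prom.svc.cluster.local:9090/monitoring/prometheus/api/v1/status/buildinfo
A JSON response here means the URL and path prefix are correct; a 404 means the prefix is still wrong.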
I'm using both helm charts (kube-prometheus-stack and prometheus-adapter).
The additional path prefix that works for me is "/", but the Prometheus URL must include the release name you gave the stack when you ran helm install. I'm using "prostack" as the stack name. So finally, this works for me:
helm install <adapter-name> prometheus-community/prometheus-adapter -n <namespace> --set prometheus.url=http://prostack-kube-prometheus-s-prometheus.monitoring.svc.cluster.local --set prometheus.port=9090 --set prometheus.path=/
I struggled with the same issue.
When I installed my Prometheus server with Helm via the community chart, I got a message like this:
The Prometheus server can be accessed via port 80 on the following DNS name from within your cluster:
my-tag-prometheus-server.carbon.svc.cluster.local
Please note that it says the service is accessible on port 80 and not 9090, which I had not noticed.
So in my values.yaml file for the prometheus-adapter chart I specified port 80 (instead of the standard 9090) and it worked:
# Url to access prometheus
prometheus:
  # Value is templated
  url: http://my-tag-prometheus-server.carbon.svc.cluster.local
  port: 80
  path: ""
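If you are not sure which port your Prometheus service actually listens on, you can look it up directly (the service name and namespace here are the ones from my install notice; yours will differ):
$ kubectl -n carbon get svc my-tag-prometheus-server -o jsonpath='{.spec.ports[0].port}'
Whatever this prints is the value to put into prometheus.port in the adapter's values.yaml.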

I'm facing this error in Kubernetes using minikube

I tried to deploy an nginx server using Kubernetes. I was able to create the deployment and then the service, but when I run the curl command I get an error, and I'm not able to curl or open the nginx webpage in a browser.
Below are the commands I used and the error I got.
kubectl get pods
NAME READY STATUS RESTARTS AGE
curl 1/1 Running 8 15d
curl-deployment-646445496f-59fs9 1/1 Running 7 15d
hello-5d448ffc76-cwzcl 1/1 Running 13 23d
hello-node-7567d9fdc9-ffdkx 1/1 Running 8 20d
my-nginx-5b6fb7fb46-bdzdq 0/1 ContainerCreating 0 15d
mytestwebapp 1/1 Running 10 21d
nginx-6799fc88d8-w76cb 1/1 Running 5 13d
nginx-deployment-66b6c48dd5-9mkh8 1/1 Running 12 23d
nginx-test-795d659f45-d9shx 1/1 Running 4 13d
rss-site-7b6794856f-9586w 2/2 Running 40 15d
rss-site-7b6794856f-z59vn 2/2 Running 78 21d
jit@jit-Vostro-15-3568:~$ kubectl logs webserver
Error from server (NotFound): pods "webserver" not found
jit@jit-Vostro-15-3568:~$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hello-node LoadBalancer 10.104.134.171 <pending> 8080:31733/TCP 13d
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 23d
my-nginx NodePort 10.103.114.92 <none> 8080:32563/TCP,443:32397/TCP 15d
nginx NodePort 10.110.113.60 <none> 80:30985/TCP 13d
nginx-test NodePort 10.109.16.192 <none> 8080:31913/TCP 13d
jit@jit-Vostro-15-3568:~$ curl kube-worker-1:30985
curl: (6) Could not resolve host: kube-worker-1
As you can see, you have a pod called nginx, which indicates that an nginx server is already deployed in a pod on your cluster. You don't have a pod called webserver, and that's why you're getting the
Error from server (NotFound): pods "webserver" not found error.
Also, to access the nginx service, curl the correct ip:port pair. From inside the cluster, use the service's ClusterIP and service port:
$ curl 10.110.113.60:80
The NodePort (30985 here) is only exposed on the nodes' own IP addresses, not on the ClusterIP.
If you point a web browser to http://IP_OF_NODE:ASSIGNED_PORT (where IP_OF_NODE is an IP address of one of your nodes and ASSIGNED_PORT is the port assigned during the create service command), you should see the NGINX Welcome page!
Take a look: nginx-app-kubernetes.
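On minikube, a rough way to get a usable node IP and URL for that NodePort (the service name nginx comes from your kubectl get svc output) is:
$ minikube ip
$ curl http://$(minikube ip):30985
$ minikube service nginx --url
The last command simply prints the full http://NODE_IP:NODE_PORT URL for the service.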
I tried the above scenario locally.
Do a kubectl describe svc <svc-name>
and check whether it has any endpoints.
It probably doesn't have any endpoints.
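For example (my-nginx is the service name from your output; an empty Endpoints field means the service selector does not match any ready pods, which fits your my-nginx pod being stuck in ContainerCreating):
$ kubectl describe svc my-nginx | grep -i endpoints
$ kubectl get endpoints my-nginx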

kubectl proxy not working on Ubuntu LTS 18.04

I've installed Kubernetes on Ubuntu 18.04 using this article. Everything was working fine, and then I tried to install the Kubernetes dashboard with these instructions.
Now when I run kubectl proxy, the dashboard does not come up; the browser shows the following error message when I try to access it using the default kubernetes-dashboard URL.
http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {
},
"status": "Failure",
"message": "no endpoints available for service \"https:kubernetes-dashboard:\"",
"reason": "ServiceUnavailable",
"code": 503
}
The following commands give this output, where kubernetes-dashboard shows status CrashLoopBackOff:
$> kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
default amazing-app-rs-59jt9 1/1 Running 5 23d
default amazing-app-rs-k6fg5 1/1 Running 5 23d
default amazing-app-rs-qd767 1/1 Running 5 23d
default amazingapp-one-deployment-57dddd6fb7-xdxlp 1/1 Running 5 23d
default nginx-86c57db685-vwfzf 1/1 Running 4 22d
kube-system coredns-6955765f44-nqphx 0/1 Running 14 25d
kube-system coredns-6955765f44-psdv4 0/1 Running 14 25d
kube-system etcd-master-node 1/1 Running 8 25d
kube-system kube-apiserver-master-node 1/1 Running 42 25d
kube-system kube-controller-manager-master-node 1/1 Running 11 25d
kube-system kube-flannel-ds-amd64-95lvl 1/1 Running 8 25d
kube-system kube-proxy-qcpqm 1/1 Running 8 25d
kube-system kube-scheduler-master-node 1/1 Running 11 25d
kubernetes-dashboard dashboard-metrics-scraper-7b64584c5c-kvz5d 1/1 Running 0 41m
kubernetes-dashboard kubernetes-dashboard-566f567dc7-w2sbk 0/1 CrashLoopBackOff 12 41m
$> kubectl get services --all-namespaces
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default kubernetes ClusterIP ---------- <none> 443/TCP 25d
default nginx NodePort ---------- <none> 80:32188/TCP 22d
kube-system kube-dns ClusterIP ---------- <none> 53/UDP,53/TCP,9153/TCP 25d
kubernetes-dashboard dashboard-metrics-scraper ClusterIP ---------- <none> 8000/TCP 24d
kubernetes-dashboard kubernetes-dashboard ClusterIP ---------- <none> 443/TCP 24d
$ kubectl get events -n kubernetes-dashboard
LAST SEEN TYPE REASON OBJECT MESSAGE
24m Normal Pulling pod/kubernetes-dashboard-566f567dc7-w2sbk Pulling image "kubernetesui/dashboard:v2.0.0-rc2"
4m46s Warning BackOff pod/kubernetes-dashboard-566f567dc7-w2sbk Back-off restarting failed container
$ kubectl describe services kubernetes-dashboard -n kubernetes-dashboard
Name: kubernetes-dashboard
Namespace: kubernetes-dashboard
Labels: k8s-app=kubernetes-dashboard
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"k8s-app":"kubernetes-dashboard"},"name":"kubernetes-dashboard"...
Selector: k8s-app=kubernetes-dashboard
Type: ClusterIP
IP: 10.96.241.62
Port: <unset> 443/TCP
TargetPort: 8443/TCP
Endpoints:
Session Affinity: None
Events: <none>
$ kubectl logs kubernetes-dashboard-566f567dc7-w2sbk -n kubernetes-dashboard
2020/01/29 16:00:34 Starting overwatch
2020/01/29 16:00:34 Using namespace: kubernetes-dashboard
2020/01/29 16:00:34 Using in-cluster config to connect to apiserver
2020/01/29 16:00:34 Using secret token for csrf signing
2020/01/29 16:00:34 Initializing csrf token from kubernetes-dashboard-csrf secret
panic: Get https://10.96.0.1:443/api/v1/namespaces/kubernetes-dashboard/secrets/kubernetes-dashboard-csrf: dial tcp 10.96.0.1:443: i/o timeout

goroutine 1 [running]:
github.com/kubernetes/dashboard/src/app/backend/client/csrf.(*csrfTokenManager).init(0xc0003dac80)
    /home/travis/build/kubernetes/dashboard/src/app/backend/client/csrf/manager.go:40 +0x3b4
github.com/kubernetes/dashboard/src/app/backend/client/csrf.NewCsrfTokenManager(...)
    /home/travis/build/kubernetes/dashboard/src/app/backend/client/csrf/manager.go:65
github.com/kubernetes/dashboard/src/app/backend/client.(*clientManager).initCSRFKey(0xc000534200)
    /home/travis/build/kubernetes/dashboard/src/app/backend/client/manager.go:494 +0xc7
github.com/kubernetes/dashboard/src/app/backend/client.(*clientManager).init(0xc000534200)
    /home/travis/build/kubernetes/dashboard/src/app/backend/client/manager.go:462 +0x47
github.com/kubernetes/dashboard/src/app/backend/client.NewClientManager(...)
    /home/travis/build/kubernetes/dashboard/src/app/backend/client/manager.go:543
main.main()
    /home/travis/build/kubernetes/dashboard/src/app/backend/dashboard.go:105 +0x212
Any suggestions to fix this? Thanks in advance.
I noticed that the guide you used to install the Kubernetes cluster is missing one important part.
According to the Kubernetes documentation:
For flannel to work correctly, you must pass --pod-network-cidr=10.244.0.0/16 to kubeadm init.
Set /proc/sys/net/bridge/bridge-nf-call-iptables to 1 by running sysctl net.bridge.bridge-nf-call-iptables=1 to pass bridged IPv4 traffic to iptables’ chains. This is a requirement for some CNI plugins to work, for more information please see here.
Make sure that your firewall rules allow UDP ports 8285 and 8472 traffic for all hosts participating in the overlay network. see here .
Note that flannel works on amd64, arm, arm64, ppc64le and s390x under Linux. Windows (amd64) is claimed as supported in v0.11.0 but the usage is undocumented.
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/2140ac876ef134e0ed5af15c65e414cf26827915/Documentation/kube-flannel.yml
For more information about flannel, see the CoreOS flannel repository on GitHub.
To fix this:
I suggest using the command:
sysctl net.bridge.bridge-nf-call-iptables=1
And then reinstall flannel:
kubectl delete -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
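After flannel is reinstalled, a rough way to verify the fix (the pod labels below are taken from the outputs above) is to check that CoreDNS becomes Ready and then let the dashboard pod be recreated so it can reach the API server again:
$ kubectl -n kube-system get pods -l k8s-app=kube-dns
$ kubectl -n kubernetes-dashboard delete pod -l k8s-app=kubernetes-dashboard
$ kubectl -n kubernetes-dashboard get pods -w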
Update: I verified that the /proc/sys/net/bridge/bridge-nf-call-iptables value is 1 by default on ubuntu-18-04-lts. So the issue here is that you need to access the dashboard locally.
If you are connected to your master node via ssh, it could be possible to use the -X flag with ssh in order to launch a web browser via ForwardX11. Fortunately, ubuntu-18-04-lts has it turned on by default.
ssh -X server
Then install a local web browser like Chromium:
sudo apt-get install chromium-browser
chromium-browser
And finally access the dashboard locally from the node:
http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
Hope it helps.

Kong Ingress Controller at Home

I'm learning about Kubernetes and ingress controllers, but I'm stuck on this error when I try to apply the Kong ingress manifest...
ingress-kong-7dd57556c5-bh687 0/2 Init:0/1 0 29s
kong-migrations-gzlqj 0/1 Init:0/1 0 28s
postgres-0 0/1 Pending 0 28s
Is it possible to run this ingress on my home server without minikube? If so, how?
Note: I have a FQDN pointing to my home server.
I guess you ran the manifest from GitHub.
Issues with Pods
I have reproduced your case. As you have 3 pods, you have used the option with a DB.
If you describe the pods using
$ kubectl describe pod <podname> -n kong
you will receive this error output:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 7s (x4 over 17s) default-scheduler pod has unbound immediate PersistentVolumeClaims (repeated 2 times)
You can also check the job in the kong namespace.
It works correctly on a fresh Minikube cluster, so I guess you might need to apply similar changes to the storageclass so that the PersistentVolumeClaims can bind.
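A quick way to check whether that is the problem (the kong namespace comes from the manifest you applied):
$ kubectl -n kong get pvc
$ kubectl get storageclass
If the PVC stays Pending and no storageclass is marked (default), the postgres pod will never be scheduled.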
Is it possible to run this ingress on my home server without minikube? If so, how?
You have to use Kubernetes to do it. Since Minikube supports LoadBalancer services, you can use it at home.
You can check this thread about FQDN. As mentioned there:
The host machine should be able to resolve the name of that FQDN. You might add a record into the /etc/hosts at the Mac host to achieve that:
10.0.0.2 mydb.mytestdomain
But in your case it should be the IP address of the LoadBalancer, kong-proxy.
Obtain LoadBalancer IP in Minikube
If you deploy everything correctly you can check your services:
$ kubectl get svc -n kong
You will see the kong-proxy service with type LoadBalancer and a <pending> EXTERNAL-IP.
To obtain the external IP you have to use minikube tunnel.
Please note that you need to keep $ sudo minikube tunnel running in one console the whole time.
Before Minikube tunnel
$ kubectl get svc -n kong
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kong-proxy LoadBalancer 10.110.218.74 <pending> 80:31881/TCP,443:31319/TCP 103m
kong-validation-webhook ClusterIP 10.108.204.137 <none> 443/TCP 103m
postgres ClusterIP 10.105.9.54 <none> 5432/TCP 103m
After
$ kubectl get svc -n kong
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kong-proxy LoadBalancer 10.110.218.74 10.110.218.74 80:31881/TCP,443:31319/TCP 104m
kong-validation-webhook ClusterIP 10.108.204.137 <none> 443/TCP 104m
postgres ClusterIP 10.105.9.54 <none> 5432/TCP 104m
Testing Kong
Here you can find how to get started with Kong. It will show you how to create an Ingress. Later, as I mentioned, you have to edit the Ingress and add a host rule, similar to the K8s docs.
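As a rough sketch (mydb.mytestdomain is just the example name from the quote above, and the IP is the kong-proxy EXTERNAL-IP shown after the tunnel), you could then test the Ingress rule from the host machine like this:
$ echo "10.110.218.74 mydb.mytestdomain" | sudo tee -a /etc/hosts
$ curl -i http://mydb.mytestdomain/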

Unable to get the metrics from heapster pod in Kubernetes Cluster

I am trying to get the metrics in the Kubernetes dashboard. For that I'm running the influxdb and heapster pods in my kube-system namespace. I checked the status of the pods using the command kubectl get pods -n kube-system. Here is the link which I followed. But heapster shows this in the logs:
E1023 13:41:07.915723 1 reflector.go:190] k8s.io/heapster/metrics/util/util.go:30: Failed to list *v1.Node: Get https://kubernetes.default/api/v1/nodes?resourceVersion=0: dial tcp: i/o timeout
Could anybody suggest where I should make changes in my configuration?
Looks like heapster cannot talk to your kube-apiserver through the kubernetes service in your default namespace. A few things you can try:
Check that the service is defined in the default namespace:
$ kubectl get svc kubernetes
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 92d
Check that all your kube-proxy pods are running ok:
$ kubectl -n kube-system -l=k8s-app=kube-proxy get pods
NAME READY STATUS RESTARTS AGE
kube-proxy-xxxxx 1/1 Running 0 4d18h
...
Check that all your overlay (CNI) pods are running. For example, for Calico:
$ kubectl -n kube-system -l=k8s-app=calico-node get pods
NAME READY STATUS RESTARTS AGE
calico-node-88fgd 2/2 Running 3 4d21h
...
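If all of the above look healthy, a rough extra test (the curl image is just an example) is to check whether a pod in the default namespace can reach the API server at all; an authorization error here is fine, an i/o timeout is not:
$ kubectl run api-test --rm -it --restart=Never --image=curlimages/curl -- \
    curl -k -m 5 https://kubernetes.default/api/v1/nodes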