GKE: How to get number of nodes and pods using API - kubernetes

Currently, I obtain various information from the Google Cloud Platform console screen, but in the future I would like to obtain it using an API.
The information obtained is as follows.
Kubernetes Engine>Clusters>Cluster Size
Kubernetes Engine>Workloads>Pods
Please tell me which API corresponds to each piece of information.

Under the hood, the GKE UI calls the Kubernetes API to get this information and show it in the UI.
You can use kubectl to query the Kubernetes API for the same information.
kubectl get nodes
kubectl get pods
If you turn on verbose mode in kubectl, it will show which REST API it is calling on the Kubernetes API server.
kubectl --v=8 get nodes
kubectl --v=8 get pods
The REST API endpoints for nodes and pods are:
GET https://kubernetes-api-server-endpoint:6443/api/v1/nodes?limit=500
GET https://kubernetes-api-server-endpoint:6443/api/v1/namespaces/default/pods?limit=500
Here is the doc on how to configure kubectl to connect with GKE.
Here is the doc from Kubernetes on the different ways to access the Kubernetes API.
You can also use kubectl proxy to try it out.
Remember: to call the above REST APIs you need to authenticate to the Kubernetes API server, either with a certificate or with a bearer token.
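For example, here is a minimal sketch of calling those endpoints with a bearer token (the endpoint below is the same placeholder as above; the token line reuses the gcloud helper shown later on this page, and assumes gcloud and jq are installed):
# Obtain a bearer token for your current gcloud identity
TOKEN=$(gcloud config config-helper --format=json | jq -r '.credential.access_token')
# Count nodes: the API returns a list object, so count its items
curl -sk -H "Authorization: Bearer $TOKEN" \
  "https://kubernetes-api-server-endpoint:6443/api/v1/nodes" | jq '.items | length'
# Count pods in the default namespace
curl -sk -H "Authorization: Bearer $TOKEN" \
  "https://kubernetes-api-server-endpoint:6443/api/v1/namespaces/default/pods" | jq '.items | length'
Note that -k skips TLS verification; for anything beyond a quick test, pass the cluster CA with --cacert instead.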

You need to:
install your command line
connect to your project
connect to your cluster
retrieve the number of nodes/pods inside your cluster
Install your command line
You can use your preferred command line, or you can use Cloud Shell in your browser (the online command-line interface integrated into Google Cloud Platform).
Option A) Using your own command-line program, you need to install the Google Cloud SDK command (gcloud) on your machine.
Option B) Otherwise, if you use Cloud Shell, just click the Activate Cloud Shell button at the top of the page.
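For option A, a minimal sketch of one common install path (assumes a Linux or macOS shell; see Google's install documentation for your OS):
# Download and run the Google Cloud SDK installer, then initialize gcloud
curl https://sdk.cloud.google.com | bash
exec -l $SHELL   # restart your shell so gcloud is on the PATH
gcloud init      # log in and pick a default project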
Connect to your project
(only for option A)
Log in to your Google Cloud account: gcloud auth login
$ gcloud auth login
Your browser has been opened to visit:
https://accounts.google.com/signin/oauth/oauthchooseaccount?client_id=65654645461.apps.googleusercontent.com&as=yJ_pR_9VSHEGFKSDhzpiw&destination=http%3A%2F%2Flocalhost%3A8085&approval_state=!ChRVVHYTE11IxY2FVbTIxb2xhbTk0SBIfczcxb2xyQ3hfSFVXNEJxcmlYbTVkb21pNVlhOF9CWQ%E2%88%99AJDr988AKKKKKky48vyl43SPBJ-gsNQf8w57Djasdasd&oauthgdpr=1&oauthriskyscope=1&xsrfsig=ChkAASDasdmanZsdasdNF9sDcdEftdfECwCAt5Eg5hcHByb3ZhbF9zdGF0ZRILZGVzdGluYXRpb24ASDfsdf1Eg9vYXV0aHJpc2t5c2NvcGU&flowName=GeneralOAuthFlow
Connect to your project: gcloud config set project your_project_id
$ gcloud projects list
PROJECT_ID NAME PROJECT_NUMBER
first-project-265905 My Project 117684542848
second-project-435504 test 895475526863
$ gcloud config set project first-project-265905
Connect to your cluster
Once connected to your project, you need to connect to your cluster:
gcloud container clusters get-credentials your_cluster_name
$ gcloud container clusters list
NAME LOCATION MASTER_VERSION MASTER_IP MACHINE_TYPE NODE_VERSION NUM_NODES STATUS
test-cluster-1 asia-northeast1-a 1.13.11-gke.14 35.200.23.72 f1-micro 1.13.11-gke.14 3 RUNNING
$ gcloud container clusters get-credentials test-cluster-1
Fetching cluster endpoint and auth data.
kubeconfig entry generated for test-cluster-1.
Retrieve the number of nodes/pods inside your cluster
Inside a given namespace, run the commands:
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
gke-test-cluster-1-default-pool-d85b49-2545 NotReady <none> 24m v1.13.11-gke.14
gke-test-cluster-1-default-pool-d85b49-2dr0 NotReady <none> 3h v1.13.11-gke.14
gke-test-cluster-1-default-pool-d85b49-2f31 NotReady <none> 1d v1.13.11-gke.14
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
busybox 0/1 Pending 0 44s
nginx 0/1 Pending 0 1m
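If you only need the counts themselves (the numbers the GKE console shows), a small sketch:
# Count nodes and pods by counting output lines, with headers suppressed
kubectl get nodes --no-headers | wc -l
kubectl get pods --no-headers | wc -l
# Or count pods across all namespaces, as the Workloads page does
kubectl get pods --all-namespaces --no-headers | wc -l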

Speaking of Python, the Kubernetes Engine API can be used in this case.
Kubernetes Engine > Clusters > Cluster Size
In particular, the method get(projectId=None, zone=None, clusterId=None, name=None, x__xgafv=None) returns an object that contains the "currentNodeCount" value.
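If you prefer the command line to Python, gcloud's describe command is backed by the same clusters.get method, so a sketch like this (cluster name and zone are placeholders) prints the node count directly:
# Print only the currentNodeCount field of the clusters.get response
gcloud container clusters describe your-cluster-name \
  --zone your-zone --format="value(currentNodeCount)"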
Kubernetes Engine > Workloads > Pods
A code example for listing pods can be found here:
Access Clusters Using the Kubernetes API

Related

How to find MASTER_IP and MASTER_CLUSTER_IP in k8s?

I am following this guide and trying to create the TLS cert. I am using cfssl and I am able to create the required files, but what should I provide for MASTER_IP and MASTER_CLUSTER_IP?
When I execute kubectl cluster-info, I can only see the following information:
Kubernetes control plane is running at https://xxx.yy.zzz.40:6443
CoreDNS is running at https://xxx.yy.zzz.40:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Where can I find these two values?
Use the command below:
kubectl get no -owide
It displays the internal and external IPs of all the nodes in the cluster.
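If you also need MASTER_CLUSTER_IP, which in that guide is typically the ClusterIP of the built-in kubernetes Service, a sketch (assumes a standard cluster):
# MASTER_IP: the API server address recorded in your kubeconfig
kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'
# MASTER_CLUSTER_IP: the ClusterIP of the kubernetes Service in the default namespace
kubectl get svc kubernetes -n default -o jsonpath='{.spec.clusterIP}'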

kube-apiserver on OpenShift

I'm new to OpenShift and Kubernetes.
I need to access kube-apiserver on an existing OpenShift environment:
oc v3.10.0+0c4577e-1
kubernetes v1.10.0+b81c8f8
How do I know whether kube-apiserver is already installed, or how do I get it installed?
I checked all the containers, and there is not even a path /etc/kubernetes/manifests.
Here is the list of Docker processes on all cluster nodes; could it be hiding behind one of these?
k8s_fluentd-elasticseark8s_POD_logging
k8s_POD_tiller-deploy
k8s_api_master-api-ip-...ec2.internal_kube-system
k8s_etcd_master-etcd-...ec2.internal_kube-system
k8s_POD_master-controllers
k8s_POD_master-api-ip-
k8s_POD_kube-state
k8s_kube-rbac-proxy
k8s_POD_node-exporter
k8s_alertmanager-proxy
k8s_config-reloader
k8s_POD_alertmanager_openshift-monitoring
k8s_POD_prometheus
k8s_POD_cluster-monitoring
k8s_POD_heapster
k8s_POD_prometheus
k8s_POD_webconsole
k8s_openvswitch
k8s_POD_openshift-sdn
k8s_POD_sync
k8s_POD_master-etcd
If you just need to verify that the cluster is up and running, you can simply run oc get nodes, which communicates with the kube-apiserver to retrieve information.
oc config view will show where kube-apiserver is hosted under the clusters -> cluster -> server section. On that host machine you can run docker ps to display the running containers, which should include kube-apiserver.
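For example, a quick sketch (the grep pattern is only a guess at the container name; adjust as needed):
# Print the API server URL from your current login context
oc config view --minify -o jsonpath='{.clusters[0].cluster.server}'
# Then, on that host machine, look for the api container among the running ones
sudo docker ps | grep -i api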

How to access GKE kubectl proxy dashboard?

I would imagine the interface would have some button I could click to launch the kubectl proxy dashboard, but I could not find it.
I tried this command to get the token and entered it in:
gcloud container clusters get-credentials mycluster
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | awk '/^deployment-controller-token-/{print $1}') | awk '$1=="token:"{print $2}'
kubectl proxy
And it shows some things but not others (services are missing; it says they're forbidden).
How do I use kubectl proxy or show that dashboard with GKE?
Provided you are authenticated with gcloud auth login and the current project and k8s cluster are configured to the ones you need, authenticate kubectl to the cluster (this will write ~/.kube/config):
gcloud container clusters get-credentials <cluster name> --zone <zone> --project <project>
Retrieve the auth token that kubectl itself uses to authenticate as you:
gcloud config config-helper --format=json | jq -r '.credential.access_token'
run
kubectl proxy
Then open a web browser on your local machine at
http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy
(This will only work if you checked the Deploy Dashboard checkbox in the GCP console)
and use the token from the second command to log in with your Google Account's permissions.
The Dashboard is disabled and deprecated in GKE as of September 2017. GKE provides a built-in dashboard through the Management Console GUI.
You can disable it from the Google Cloud Console (UI).
Edit your cluster
Go to "Add-ons" section
Find "Kubernetes dashboard"
Chose "disabled" from dropdown
Save it.
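The same can also be done from the command line; a sketch, with cluster name and zone as placeholders:
# Disable the Kubernetes Dashboard add-on on an existing cluster
gcloud container clusters update your-cluster-name \
  --zone your-zone --update-addons=KubernetesDashboard=DISABLED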
Also, according to the documentation, it will be removed starting with GKE 1.15:
Warning: The open source Kubernetes Dashboard addon is deprecated for clusters on GKE and will be removed as an option in version 1.15. It is recommended to use the alternative GCP Console dashboards described on this page.
At the time of writing, the dashboard is not deployed by default (neither in the standard Kubernetes distribution, nor as part of a GKE cluster). In order to get it up and running, you have to first follow the instructions from the Kubernetes site, summarized here:
Within the proper kubectl context, run the following: kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml (nb: this url is obviously subject to change, so do check the official site to obtain the most recent version).
Then do what @Alexander outlines:
gcloud container clusters get-credentials <cluster name> --zone <zone> --project <project>
gcloud config config-helper --format=json
kubectl proxy
You'll be prompted for either the access token displayed in the second step or a kubeconfig file:
Pasting the access token in the field provided will gain you entry to the dashboard.

K8S dashboard not accessible after first cluster in GKE - GCP using console

Newbie setup:
Created first project in GCP.
Created a cluster with defaults, 3 nodes; node version 1.7.6, cluster master version 1.7.6-gke.1.
Deployed an application in a pod, per the example.
Able to access "hello world" and the hostname, using the external IP and the port.
In the GCP/GKE page of my cloud console, I clicked "Discovery and load balancing" and was able to see the "kubernetes-dashboard" process with a green tick, but cannot access it through the IP listed. Tried 8001, 9090, /ui and nothing worked.
Not using any cloud shell or gcloud commands on my local laptop; everything is done in the console.
Questions:
How can anyone access the kubernetes-dashboard of a cluster created in the console?
The docs are unclear: are the dashboard components incorporated into the console itself? Are the docs out of sync with the GCP/GKE screens?
The tutorial says to run kubectl proxy and then to open
http://localhost:8001/ui, but it doesn't work. Why?
If you create a cluster with version 1.9.x or greater, then you can access it using tokens.
Get the secret:
kubectl -n kube-system describe secrets `kubectl -n kube-system get secrets | awk '/clusterrole-aggregation-controller/ {print $1}'` | awk '/token:/ {print $2}'
Copy the secret.
Run kubectl proxy.
Open the UI using 127.0.0.1:8001/ui. This will redirect to the login page.
There will be two options to log in: kubeconfig and token.
Select token and paste the secret copied earlier.
Hope this helps.
It seems to be an issue with the internal Kubernetes DNS service starting at version 1.7.6 on Google Cloud.
The solution is to access the dashboard at this endpoint instead:
http://localhost:8001/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard
Github Issue links:
https://github.com/kubernetes/dashboard/issues/2368
https://github.com/kubernetes/kubernetes/issues/52729
The address of the dashboard service is only accessible from inside of the cluster. If you ssh into a node in your cluster, you should be able to connect to the dashboard. You can verify this by noticing that the address is within the services CIDR range for your cluster.
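To check that range, a sketch (cluster name and zone are placeholders):
# Print the services CIDR; the dashboard service address should fall inside it
gcloud container clusters describe your-cluster-name \
  --zone your-zone --format="value(servicesIpv4Cidr)"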
The dashboard is running as a pod inside of your cluster with an associated service. If you open the Workloads view you will see the kubernetes-dashboard deployment and can see the pod that was created by the deployment. I'm not sure which docs you are referring to, since you didn't provide a link.
When you run kubectl proxy it creates a secure connection from your local machine into your cluster. It works by connecting to your master and then running through a proxy on the master to the pod/service/host that you are connecting to via an ssh tunnel. It's possible that it isn't working because the ssh tunnels are not running; you should verify that your project has newly created ssh rules allowing access from the cluster endpoint IP address. Otherwise, if you could explain more about how it fails, that would be useful for debugging.
First:
gcloud container clusters get-credentials cluster-1 --zone my-zone --project my-project
Then find your Kubernetes dashboard endpoint by running:
kubectl cluster-info
It will be something like https://42.42.42.42/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy
Install kube-dashboard:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
Run:
$ kubectl proxy
Access:
http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login

Kubernetes unable to pull images from gcr.io

I am trying to setup Kubernetes for the first time. I am following the Fedora Manual installation guide: http://kubernetes.io/v1.0/docs/getting-started-guides/fedora/fedora_manual_config.html
I am trying to get the Kubernetes add-ons running, specifically the kube-ui. I created the service and replication controller like so:
kubectl create -f cluster/addons/kube-ui/kube-ui-rc.yaml --namespace=kube-system
kubectl create -f cluster/addons/kube-ui/kube-ui-svc.yaml --namespace=kube-system
When I run
kubectl get events --namespace=kube-system
I see errors such as this:
Failed to pull image "gcr.io/google_containers/pause:0.8.0": image pull failed for gcr.io/google_containers/pause:0.8.0, this may be because there are no credentials on this request. details: (Authentication is required.)
How am I supposed to tell Kubernetes to authenticate? This isn't covered in the documentation. So how do I fix this?
This happened due to a recent outage of GCE storage, as a result of which everyone hit this error while pulling images from gcr.io (which uses GCE storage on the backend).
Are you still seeing this error?
As the message says, you need credentials. Are you using Google Container Engine? If so, run:
gcloud config set project <your-project>
gcloud config set compute/zone <your-zone, like us-central1-f>
gcloud beta container clusters get-credentials --cluster <your-cluster-name>
Then your GKE cluster will have the credentials.