Instance attributes are not available in metadata - kubernetes

I'm setting up a new K8s cluster (1.13.7-gke.8) on GKE, and I want the Google Cloud Logging API to properly report namespace and instance names.
This is executed in a new GKE cluster with workload-identity enabled.
I started a workload container to test access to the metadata service, and these are the results:
kubectl run -it --generator=run-pod/v1 --image google/cloud-sdk --namespace prod --rm workload-identity-test
And from within the container, after executing:
curl "http://metadata.google.internal/computeMetadata/v1/instance/attributes/" -H "Metadata-Flavor: Google"
I expect the output to include cluster-name, container-name, and namespace-id, but the actual output is only cluster-name.

I was getting the same result, but when I ran the following, the metadata showed up:
gcloud beta container node-pools create [NODEPOOL_NAME] \
--cluster=[CLUSTER_NAME] \
--workload-metadata-from-node=EXPOSED
However, you will only get cluster-name from the metadata. For example,
root@workload-identity-test:/# curl "http://metadata.google.internal/computeMetadata/v1/instance/attributes/" -H "Metadata-Flavor: Google"
cluster-location
cluster-name
cluster-uid
configure-sh
created-by
disable-legacy-endpoints
enable-oslogin
gci-ensure-gke-docker
gci-update-strategy
google-compute-enable-pcid
instance-template
kube-env
kube-labels
kubelet-config
user-data
If you are looking at getting namespaces and containers, I suggest you look at talking directly to the Kubernetes API (a minimal sketch follows below), which is essentially what the 'Workloads' tab on GKE does. I'm not really sure what you are trying to do with the Google Cloud Logging API, but maybe you can elaborate in a different question.
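As an illustration only (not part of the original answer), here is a minimal client-go sketch that lists pods and their namespaces from inside the cluster. It assumes a recent client-go version (where List takes a context) and that the pod's service account has RBAC permission to list pods:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// In-cluster config uses the service account token mounted into the pod.
	config, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// List pods across all namespaces, roughly what the GKE 'Workloads' tab shows.
	pods, err := clientset.CoreV1().Pods("").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s\n", p.Namespace, p.Name)
	}
}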

Related

Cannot install Helm chart when accessing GKE cluster directly

I've set up a basic GKE cluster using Autopilot settings. I am able to install Helm charts on it using kubectl with proper kubeconfig pointing to the GKE cluster.
I'd like to do the same without the kubeconfig, by providing the cluster details with relevant parameters.
To do that I'm running a docker container using the alpine/helm image and passing the parametrised command, which looks like this:
docker run --rm -v $(pwd):/chart alpine/helm install <my_chart_name> /chart --kube-apiserver <cluster_endpoint> --kube-ca-file /chart/<cluster_certificate_file> --kube-as-user <my_gke_cluster_username> --kube-token <token>
Unfortunately it returns:
Error: INSTALLATION FAILED: Kubernetes cluster unreachable: Get "http://<cluster_endpoint>/version": dial tcp <cluster_endpoint>:80: i/o timeout
Is this even doable with GKE?
One challenge will be that GKE leverages a plugin (currently built in to kubectl itself but soon the standalone gke-gcloud-auth-plugin) to obtain an access token for the default gcloud user.
This token expires hourly.
If you can, it would be better to mount the kubeconfig (${HOME}/.kube/config) file into the container; it should (!) then authenticate just as kubectl does, which will not only use the access token correctly but will also renew it as appropriate.
https://github.com/alpine-docker/helm
docker run \
--interactive --tty --rm \
--volume=${PWD}/.kube:/root/.kube \
--volume=${PWD}/.helm:/root/.helm \
--volume=${PWD}/.config/helm:/root/.config/helm \
--volume=${PWD}/.cache/helm:/root/.cache/helm \
alpine/helm ...
NOTE: It appears there are several other local paths (.helm, .config and .cache) that may be required too.
Problem solved! A more experienced colleague found the solution.
I should have used the address including the "https://" protocol specification. That, however, still kept returning the "Kubernetes cluster unreachable" error, with "unknown" details instead.
I had also been using an incorrect username. Instead of the one from the kubeconfig file, a new service account should be created and its name used in the form system:serviceaccount:<namespace>:<service_account>. However, that did not change the error either.
Finally, the service account lacked a proper role; the following command did the job: kubectl create rolebinding <rolebinding_name> --clusterrole=cluster-admin --serviceaccount=<namespace>:<service_account>. Of course, cluster-admin might not be a role we want to give away freely.

How to get Kubernetes cluster name from K8s API using client-go

How to get Kubernetes cluster name from K8s API mentions that
curl http://metadata/computeMetadata/v1/instance/attributes/cluster-name -H "Metadata-Flavor: Google"
(from within the cluster), or
kubectl run curl --rm --restart=Never -it --image=appropriate/curl -- -H "Metadata-Flavor: Google" http://metadata.google.internal/computeMetadata/v1/instance/attributes/cluster-name
(from outside the cluster), can be used to retrieve the cluster name. That works.
Is there a way to perform the same programmatically using the k8s client-go library? Maybe using the RESTClient()? I've tried, but I kept getting "the server could not find the requested resource".
UPDATE
What I'm trying to do is get the cluster-name from an app that runs either on a local computer or within a k8s cluster. The k8s client-go library allows initialising the clientset via in-cluster or out-of-cluster authentication.
With the two commands mentioned at the top that is achievable. I was wondering whether there is a way to achieve the same with the client-go library, instead of having to run kubectl or curl depending on where the service runs.
The data that you're looking for (the name of the cluster) is available at the GCP level. The name itself is a resource within GKE, not Kubernetes. This means that this specific information is not available using client-go.
So in order to get this data, you can use the Google Cloud Client Libraries for Go, designed to interact with GCP.
As a starting point, you can consult this document.
First you have to download the container package:
➜ go get google.golang.org/api/container/v1
Before you launch your code, you will have to authenticate in order to fetch the data.
Google has a very good document on how to achieve that.
Basically, you have to generate a ServiceAccount key and pass it via the GOOGLE_APPLICATION_CREDENTIALS environment variable:
➜ export GOOGLE_APPLICATION_CREDENTIALS=sakey.json
Regarding the information that you want, you can fetch the cluster information (including the name) following this example; a minimal sketch is included after the sample output below.
Once you do this, you can launch your application like this:
➜ go run main.go -project <google_project_name> -zone us-central1-a
And the result would be information about your cluster:
Cluster "tom" (RUNNING) master_version: v1.14.10-gke.17 -> Pool "default-pool" (RUNNING) machineType=n1-standard-2 node_version=v1.14.10-gke.17 autoscaling=false
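As an illustration of what such a main.go might look like (an assumed sketch with made-up flag names, not the linked example verbatim), using the container/v1 client:

package main

import (
	"context"
	"flag"
	"fmt"
	"log"

	container "google.golang.org/api/container/v1"
)

func main() {
	project := flag.String("project", "", "GCP project ID")
	zone := flag.String("zone", "-", "compute zone, or '-' for all zones")
	flag.Parse()

	ctx := context.Background()
	// NewService picks up Application Default Credentials,
	// e.g. the key referenced by GOOGLE_APPLICATION_CREDENTIALS.
	svc, err := container.NewService(ctx)
	if err != nil {
		log.Fatal(err)
	}

	resp, err := svc.Projects.Zones.Clusters.List(*project, *zone).Do()
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range resp.Clusters {
		fmt.Printf("Cluster %q (%s) master_version: v%s\n", c.Name, c.Status, c.CurrentMasterVersion)
	}
}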
Also it is worth mentioning that if you run this command:
curl http://metadata/computeMetadata/v1/instance/attributes/cluster-name -H "Metadata-Flavor: Google"
You are also interacting with the GCP APIs, and the request can go unauthenticated as long as it's run within a GCE machine/GKE cluster; this provides automatic authentication.
You can read more about it in Google's Storing and retrieving instance metadata document.
Finally, one great advantage of doing this with the Cloud Client Libraries, is that it can be launched externally (as long as it's authenticated) or internally within pods in a deployment.
Let me know if it helps.
If you're running inside GKE, you can get the cluster name through the instance attributes: https://pkg.go.dev/cloud.google.com/go/compute/metadata#InstanceAttributeValue
More specifically, the following should give you the cluster name:
metadata.InstanceAttributeValue("cluster-name")
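A minimal sketch of using it (assuming the code runs inside a GKE pod, where the metadata server is reachable):

package main

import (
	"fmt"
	"log"

	"cloud.google.com/go/compute/metadata"
)

func main() {
	// The metadata server only exists on GCE/GKE, so guard against local runs.
	if !metadata.OnGCE() {
		log.Fatal("not running on GCE/GKE; metadata server unavailable")
	}
	clusterName, err := metadata.InstanceAttributeValue("cluster-name")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("cluster name:", clusterName)
}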
The example shared by Thomas lists all the clusters in your project, which may not be very helpful if you just want to query the name of the GKE cluster hosting your pod.

error: failed to discover supported resources kubernetes google cloud platform

I was performing a practical exercise where I was deploying a containerised sample application using Kubernetes.
I was trying to run the container on Google Cloud Platform using Kubernetes Engine, but while deploying the container with the "kubectl run" command from Google Cloud Shell,
it shows the error "error: failed to discover supported resources: Get https://35.240.145.231/apis/extensions/v1beta1: x509: certificate signed by unknown authority".
From the error, I gather that it's because the SSL certificate is not authorised.
I even exported the config file that resides at "$HOME/.kube/config", but I'm still getting the same error.
Please help me understand the real issue behind this.
Best,
Swapnil Pawar
You may try the following steps.
List all the available clusters:
$ gcloud container clusters list
Depending upon how you have configured the cluster: if the cluster location is configured for a specific zone, then
$ gcloud container clusters get-credentials <cluster_name> --zone <location>
or, if the location is configured for a region, then
$ gcloud container clusters get-credentials <cluster_name> --region <location>
The above command will update your kubectl config file $HOME/.kube/config
Now, the tricky part.
If you have more than one cluster that you have configured, then your $HOME/.kube/config will have two or more entries. You can verify it by doing a cat command on the config file.
To select a particular context/cluster, you need to run the following commands:
$ kubectl config get-contexts -o=name    # lists the available contexts
$ kubectl config use-context <CONTEXT_NAME>    # switches to the chosen context
$ kubectl config set-context <CONTEXT_NAME>    # only needed if you want to modify that context's settings
Now, you may run the kubectl run.

How to access GKE kubectl proxy dashboard?

I would imagine the interface would have some button I could click to launch the kubectl proxy dashboard, but I could not find it.
I tried this command to get the token and entered it in:
gcloud container clusters get-credentials mycluster
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | awk '/^deployment-controller-token-/{print $1}') | awk '$1=="token:"{print $2}'
kubectl proxy
And it shows some things, but not others (services are missing, says it's forbidden).
How do I use kubectl proxy or show that dashboard with GKE?
Provided you are authenticated with gcloud auth login and the current project and k8s cluster are configured to the ones you need, authenticate kubectl to the cluster (this will write ~/.kube/config):
gcloud container clusters get-credentials <cluster name> --zone <zone> --project <project>
Retrieve the auth token that kubectl itself uses to authenticate as you:
gcloud config config-helper --format=json | jq -r '.credential.access_token'
Run:
kubectl proxy
Then open a web browser on your local machine at
http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy
(This will only work if you checked the checkbox Deploy Dashboard in GCP console)
and use the token from the second command to log in with your Google Account's permissions.
The Dashboard is disabled and deprecated in GKE as of September 2017. GKE provides a built-in dashboard through the Management Console GUI.
You can disable it from the Google Cloud Console (UI).
Edit your cluster
Go to the "Add-ons" section
Find "Kubernetes dashboard"
Choose "disabled" from the dropdown
Save it.
Also, according to the documentation, this add-on will be removed starting with GKE 1.15:
Warning: The open source Kubernetes Dashboard addon is deprecated for clusters on GKE and will be removed as an option in version 1.15. It is recommended to use the alternative GCP Console dashboards described on this page.
At the time of writing, the dashboard is not deployed by default (neither in the standard Kubernetes distribution, nor as part of a GKE cluster). In order to get it up and running, you have to first follow the instructions from the Kubernetes site, summarized here:
Within the proper kubectl context, run the following: kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml (nb: this url is obviously subject to change, so do check the official site to obtain the most recent version).
Then do what @Alexander outlines:
gcloud container clusters get-credentials <cluster name> --zone <zone> --project <project>
gcloud config config-helper --format=json
kubectl proxy
You'll be prompted for either the access token displayed in the second step or a kubeconfig file:
Pasting the access token in the field provided will gain you entry to the dashboard.

Kubernetes unable to pull images from gcr.io

I am trying to setup Kubernetes for the first time. I am following the Fedora Manual installation guide: http://kubernetes.io/v1.0/docs/getting-started-guides/fedora/fedora_manual_config.html
I am trying to get the Kubernetes addons running, specifically the kube-ui. I created the service and replication controller like so:
kubectl create -f cluster/addons/kube-ui/kube-ui-rc.yaml --namespace=kube-system
kubectl create -f cluster/addons/kube-ui/kube-ui-svc.yaml --namespace=kube-system
When I run
kubectl get events --namespace=kube-system
I see errors such as this:
Failed to pull image "gcr.io/google_containers/pause:0.8.0": image pull failed for gcr.io/google_containers/pause:0.8.0, this may be because there are no credentials on this request. details: (Authentication is required.)
How am I supposed to tell Kubernetes to authenticate? This isn't covered in the documentation. So how do I fix this?
This happened due to a recent outage of GCE storage, as a result of which everyone hit this error while pulling images from gcr.io (which uses GCE storage on the backend).
Are you still seeing this error?
As the message says, you need credentials. Are you using Google Container Engine? Then you need to run:
gcloud config set project <your-project>
gcloud config set compute/zone <your-zone, like us-central1-f>
gcloud beta container clusters get-credentials --cluster <your-cluster-name>
Then your Container Engine cluster will have the credentials.