Getting kubernetes config file using google-cloud API

I'm able to play around with google cloud's kubernetes API like this:
import os
import time
import json
from pprint import pprint
from google.oauth2 import service_account
import googleapiclient.discovery
from six.moves import input
# https://developers.google.com/identity/protocols/oauth2/scopes
scopes = [
    'https://www.googleapis.com/auth/cloud-platform',
    'https://www.googleapis.com/auth/compute'
]
credentials = service_account.Credentials.from_service_account_file(
    'service_account.json',
    scopes=scopes
)
container = googleapiclient.discovery.build('container', 'v1', credentials=credentials)
locations = container.projects().locations()
request = locations.getServerConfig(name="projects/MY_PROJECT/locations/europe-west1-b")
response = request.execute()
However, I'd like to achieve the equivalent of
gcloud container clusters get-credentials MY_CLUSTER --zone=europe-west1-b --project MY_PROJECT
i.e. get the complete kubernetes config + authorization file (which I can then use with the kubernetes Python module).
When looking at the API reference
https://cloud.google.com/kubernetes-engine/docs/reference/rest
it seems to be missing that get-credentials call. Or am I looking at the wrong API?

Google Cloud uses short-lived access tokens and relies on the gcloud tooling to obtain and refresh them.
If you want long-lived credentials, you can create a service account at https://console.cloud.google.com/iam-admin/serviceaccounts with the role "Kubernetes Engine Developer" and download the JSON key file. Then configure your kubeconfig to use the gcp auth provider, for example:
users:
- name: user-1
  user:
    auth-provider:
      name: gcp
Set the environment variable GOOGLE_APPLICATION_CREDENTIALS to the absolute path of the JSON key file downloaded for the service account. This works with kubectl as-is, since it has special support for the gcp auth provider.
If you want to use it from, e.g., Python, you need to obtain the token for the service account:
kubectl describe serviceaccount myserviceaccount
kubectl describe secrets [secret-name]
The token can then be used with the kubernetes Python client library:
from kubernetes import client, config

config.load_kube_config()
client.configuration.api_key['authorization'] = 'your token goes here'
client.configuration.api_key_prefix['authorization'] = 'Bearer'
Note that long-lived credentials must be guarded especially carefully.
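Putting the pieces together, here is a minimal, untested sketch; the cluster endpoint, CA file path, and token are placeholders you substitute with your own values:
from kubernetes import client

cfg = client.Configuration()
cfg.host = 'https://CLUSTER_ENDPOINT'  # placeholder: your cluster's endpoint
cfg.ssl_ca_cert = '/path/to/cluster_ca.crt'  # placeholder: the cluster CA certificate file
cfg.api_key['authorization'] = 'your token goes here'  # the service account token from the secret
cfg.api_key_prefix['authorization'] = 'Bearer'

v1 = client.CoreV1Api(client.ApiClient(cfg))
for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    print(pod.metadata.name)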

Related

Error in add-iam-policy-binding to ESP endpoint service (GCloud)

I am trying to create an endpoint for an API deployed into an existing GKE cluster, following the instructions in "Getting started with Cloud Endpoints for GKE with ESPv2".
I cloned the sample code from the repo and modified the content of openapi.yaml:
# [START swagger]
swagger: "2.0"
info:
  description: "A simple Google Cloud Endpoints API example."
  title: "Endpoints Example"
  version: "1.0.0"
host: "my-api.endpoints.my-project.cloud.goog"
I then deployed it via the command:
endpoints/getting-started (master) $ gcloud endpoints services deploy openapi.yaml
Now I can see that it has been created:
$ gcloud endpoints services list
NAME                                    TITLE
my-api.endpoints.my-project.cloud.goog
I also have a PostgreSQL service account:
$ gcloud iam service-accounts list
DISPLAY NAME                   EMAIL                                                           DISABLED
my-postgresql-service-account  my-postgresql-service-acco@my-project.iam.gserviceaccount.com  False
The "Endpoint Service Configuration" section of the documentation says to add the role to the service account attached to the endpoint service, as follows, but I get this error:
$ gcloud endpoints services add-iam-policy-binding my-api.endpoints.my-project.cloud.goog \
  --member serviceAccount:my-postgresql-service-acco@my-project.iam.gserviceaccount.com \
  --role roles/servicemanagement.serviceController
ERROR: (gcloud.endpoints.services.add-iam-policy-binding) User [myusername@mycompany.com] does not have permission to access services instance [my-api.endpoints.my-project.cloud.goog:getIamPolicy] (or it may not exist): No access to resource: services/my-api.my-project.cloud.goog
The previous output shows the service exists, I guess? Now I am not sure how to resolve this. What permissions do I need? Who can grant them, and what permissions do they need to do so? How can I check? Is there any other solution?
The issue was resolved after I was assigned the "Project_Admin" role. That was not ideal, as it granted me far more permission than needed. The role roles/endpoints.portalAdmin was also tried, but did not help.

Custom password for kubernetes dashboard when using eks

Is it possible to configure a custom password for the Kubernetes dashboard when using EKS, without customizing "kube-apiserver"?
This URL mentions changes in "kube-apiserver"
https://techexpert.tips/kubernetes/kubernetes-dashboard-user-authentication/
In K8s, requests go through Authentication and then Authorization (so the API server can determine whether this user may perform the requested action). K8s doesn't have users in the simple meaning of that word (Kubernetes users are just strings associated with a request through credentials). The credential strategy is a choice you make when you install the cluster (you can choose from x509 certificates, password files, bearer tokens, etc.).
Without credentials, the K8s API server automatically falls back to an anonymous user, and there is no way to check whether the provided credentials are valid.
You can do something like the following (not tested; a Python alternative for generating the credential line is sketched after these steps):
Create a new credential using OpenSSL:
export NEW_CREDENTIAL=USER:$(echo PASSWORD | openssl passwd -apr1 -noverify -stdin)
Append the newly created credential to /opt/kubernetes/auth:
echo $NEW_CREDENTIAL | sudo tee -a /opt/kubernetes/auth
Replace the cluster's basic-auth secret:
kubectl delete secret basic-auth -n kube-system
kubectl create secret generic basic-auth --from-file=/opt/kubernetes/auth -n kube-system
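If you prefer generating the credential line in Python rather than OpenSSL, here is a small sketch using the third-party passlib package (an assumption: passlib must be installed; USER and PASSWORD are placeholders):
from passlib.hash import apr_md5_crypt  # pip install passlib

# Equivalent of: echo PASSWORD | openssl passwd -apr1 -noverify -stdin
new_credential = 'USER:' + apr_md5_crypt.hash('PASSWORD')
print(new_credential)  # append this line to /opt/kubernetes/auth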

Can't connect to GCS bucket from Python despite being logged in

I have a GCS bucket set up that contains data that I want to access remotely. As per the instructions, I have logged in via gcloud auth login, and have confirmed that I have an active, credentialed account via gcloud auth list. However, when I try to access my bucket (using the Python google.cloud.storage API), I get the following:
HttpError: Anonymous caller does not have storage.objects.list access to <my-bucket-name>.
I'm not sure why it is being accessed anonymously, since I am clearly logged in. Is there something obvious I am missing?
The Python GCP library (and others) uses a different authentication mechanism than the gcloud command.
Follow this guide to set up your environment and get access to GCS with Python.
gcloud auth login sets up the gcloud command-line tool with your credentials.
However, the way forward when executing code is to use a Service Account. Once the environment variable GOOGLE_APPLICATION_CREDENTIALS has been set, Python will use the Service Account credentials:
import os
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "path_to_your_.json_credential_file"
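Alternatively (a small sketch; the path is a placeholder), the storage client can load the key file directly instead of relying on the environment variable:
from google.cloud import storage

# Build a client straight from the JSON key file.
client = storage.Client.from_service_account_json('path_to_your_.json_credential_file')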
And then, to download gs://my_bucket/my_file.csv to a file (from the python-docs-samples):
from google.cloud import storage

def download_blob(bucket_name, source_blob_name, destination_file_name):
    """Downloads a blob from the bucket."""
    storage_client = storage.Client()
    bucket = storage_client.get_bucket(bucket_name)
    blob = bucket.blob(source_blob_name)
    blob.download_to_filename(destination_file_name)
    print('Blob {} downloaded to {}.'.format(
        source_blob_name, destination_file_name))

download_blob('my_bucket', 'my_file.csv', 'local/path/to/file.csv')
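As a quick sanity check that the credentials are picked up, a hedged sketch (assuming a reasonably recent google-cloud-storage release; 'my_bucket' is a placeholder):
from google.cloud import storage

# Listing objects exercises storage.objects.list, the permission the
# "Anonymous caller" error complained about.
client = storage.Client()
for blob in client.list_blobs('my_bucket'):
    print(blob.name)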

Authenticating to GKE master in Python

I need to authenticate to a Kubernetes cluster provisioned in GKE using the Kubernetes Python client and the Google Cloud python client. I would prefer not to shell out to gcloud for several reasons:
relying on the system shell gcloud in a Python script when I have a native Google Cloud library is inelegant
it requires the system to have gcloud
I would have to switch users to the relevant ServiceAccount and switch back
It incurs the cost of starting/joining another process
As such, the workflow of gcloud container clusters get-credentials (which delegates to gcloud config config-helper) will not suffice to get me the API key I need. How do I get the equivalent output with the Google Cloud Python API?
Here is what I have so far:
import base64
import googleapiclient.discovery
import kubernetes.client

# get the cluster object from GKE
gke = googleapiclient.discovery.build('container', 'v1', credentials=config['credentials'])
name = f'projects/{config["project_id"]}/locations/{config["location"]}/clusters/{config["name"]}'
gke_clusters = gke.projects().locations().clusters()
gke_cluster = gke_clusters.get(name=name).execute()
# set up the Kubernetes client configuration
kube_config = kubernetes.client.Configuration()
kube_config.host = 'https://{0}/'.format(gke_cluster['endpoint'])
kube_config.verify_ssl = True
#kube_config.api_key['authorization'] = "don't know what goes here"
# regrettably, the Kubernetes client requires `ssl_ca_cert` to be a path,
# not the literal cert, so I will write it here
kube_config.ssl_ca_cert = 'ssl_ca_cert'
with open(kube_config.ssl_ca_cert, 'wb') as f:
    f.write(base64.decodebytes(gke_cluster['masterAuth']['clusterCaCertificate'].encode()))
# use the Kubernetes client to do something
kube_client = kubernetes.client.ApiClient(configuration=kube_config)
kube_v1 = kubernetes.client.CoreV1Api(kube_client)
kube_v1.list_pod_for_all_namespaces(watch=False)
Below is a solution that pulls the access token out of the googleapiclient, rather than copy-pasting things manually.
import base64
from tempfile import NamedTemporaryFile

import googleapiclient.discovery
import kubernetes

def token(*scopes):
    credentials = googleapiclient._auth.default_credentials()
    scopes = [f'https://www.googleapis.com/auth/{s}' for s in scopes]
    scoped = googleapiclient._auth.with_scopes(credentials, scopes)
    googleapiclient._auth.refresh_credentials(scoped)
    return scoped.token

def kubernetes_api(cluster):
    config = kubernetes.client.Configuration()
    config.host = f'https://{cluster["endpoint"]}'
    config.api_key_prefix['authorization'] = 'Bearer'
    config.api_key['authorization'] = token('cloud-platform')
    with NamedTemporaryFile(delete=False) as cert:
        cert.write(base64.decodebytes(cluster['masterAuth']['clusterCaCertificate'].encode()))
        config.ssl_ca_cert = cert.name
    client = kubernetes.client.ApiClient(configuration=config)
    api = kubernetes.client.CoreV1Api(client)
    return api

def run(cluster):
    """You'll need to give whichever account `googleapiclient` is using the
    'Kubernetes Engine Developer' role so that it can access the Kubernetes API.
    `cluster` should be the dict you get back from `projects.zones.clusters.get`
    and the like."""
    api = kubernetes_api(cluster)
    print(api.list_pod_for_all_namespaces())
Figuring this out took longer than I care to admit. @Ivan's post helped a lot.
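For completeness, a usage sketch under the same assumptions; the project, location, and cluster name are placeholders, and googleapiclient falls back to application default credentials here:
import googleapiclient.discovery

# Fetch the cluster dict from the GKE API, then hand it to run() above.
gke = googleapiclient.discovery.build('container', 'v1')
name = 'projects/my-project/locations/europe-west1-b/clusters/my-cluster'
cluster = gke.projects().locations().clusters().get(name=name).execute()
run(cluster)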
In order to authenticate to a GKE cluster, you can use a service account to connect to the project and then a secret token generated in the cluster to authenticate to it. Here are the steps:
Create a service account in GCP. Go to IAM > Service Accounts > create a service account. Give it the Project Owner role. Once the SA is created, create a key and download it as JSON.
Put key.json in the folder containing your .py script.
Get the API token. This is your main question; you can get it by reading a token from a secret:
First run kubectl get secrets.
You will get 'default-token-xxxxx'.
Then run kubectl describe secrets default-token-xxxxx (replacing xxxxx with your token name).
The token field displayed is your "API token". Copy it into your script.
Create the script. It is a bit different from yours for a few reasons: you need to authenticate to the project first with a service account, then pass the api_token, and you also need the SSL certificate when authenticating to the GKE master.
import base64, pprint
import googleapiclient.discovery
import kubernetes.client
from google.oauth2 import service_account

credentials = service_account.Credentials.from_service_account_file("key.json")
gke = googleapiclient.discovery.build('container', 'v1', credentials=credentials)
name = 'projects/your_project/locations/your_zone/clusters/your_gke_cluster'
gke_clusters = gke.projects().locations().clusters()
gke_cluster = gke_clusters.get(name=name).execute()
kube_config = kubernetes.client.Configuration()
kube_config.host = 'https://{}'.format(gke_cluster['endpoint'])
kube_config.verify_ssl = True
kube_config.api_key['authorization'] = 'your_api_token'
kube_config.api_key_prefix['authorization'] = 'Bearer'
kube_config.ssl_ca_cert = 'ssl_ca_cert'
with open(kube_config.ssl_ca_cert, 'wb') as f:
    f.write(base64.decodebytes(gke_cluster['masterAuth']['clusterCaCertificate'].encode()))
kube_client = kubernetes.client.ApiClient(configuration=kube_config)
kube_v1 = kubernetes.client.CoreV1Api(kube_client)
pprint.pprint(kube_v1.list_pod_for_all_namespaces())
Specific fields:
your_project - from GCP
your_zone - where the GKE cluster is created
your_gke_cluster - the GKE cluster name
your_api_token - what you get in step 3
This should be enough to authenticate you to a GKE cluster.

Adding roles to service accounts on Google Cloud Platform using REST API

I want to create a service account on GCP using a Python script that calls the REST API, and then give it specific roles, ideally some of these, such as roles/logging.logWriter.
First, I make a request to create the account, which works fine; I can see the account in Console/IAM.
Second, I want to give it the role, and this seems like the right method. However, it does not accept roles/logging.logWriter, failing with HttpError 400: "Role roles/logging.logWriter is not supported for this resource."
Conversely, if I set the desired policy in the console and then try the getIamPolicy method (using the gcloud tool), all I get back is the response etag: ACAB, with no mention of the role I set. Hence I think these roles refer to different things.
Any idea how to go about scripting a role/scope for a service account using the API?
You can grant permissions to a GCP service account in a GCP project without having to rewrite the entire project policy!
Use the gcloud projects add-iam-policy-binding ... command for that (docs).
For example, given the environment variables GCP_PROJECT_ID and GCP_SVC_ACC the following command grants all privileges in the container.admin role to the chosen service account:
gcloud projects add-iam-policy-binding ${GCP_PROJECT_ID} \
--member=serviceAccount:${GCP_SVC_ACC} \
--role=roles/container.admin
To review what you've done:
$ gcloud projects get-iam-policy $GCP_PROJECT_ID \
--flatten="bindings[].members" \
--format='table(bindings.role)' \
--filter="bindings.members:${GCP_SVC_ACC}"
Output:
ROLE
roles/container.admin
(or more roles, if those were granted before)
Notes:
The environment variable GCP_SVC_ACC is expected to contain the email notation for the service account.
Kudos to this answer for the nicely formatted readout.
You appear to be trying to set a role on the service account (as a resource). That's for setting who can use the service account.
If you want to give the service account (as an identity) a particular role on the project and its resources, see this method: https://cloud.google.com/resource-manager/reference/rest/v1/projects/setIamPolicy
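For completeness, a minimal, untested Python sketch of the read-modify-write cycle that gcloud projects add-iam-policy-binding performs under the hood; the project ID, member, and role are placeholders:
import googleapiclient.discovery

# Grant a role to a service account on the project via the Cloud
# Resource Manager API (read-modify-write of the project IAM policy).
crm = googleapiclient.discovery.build('cloudresourcemanager', 'v1')
project_id = 'my-project'  # placeholder
member = 'serviceAccount:my-sa@my-project.iam.gserviceaccount.com'  # placeholder
role = 'roles/logging.logWriter'

policy = crm.projects().getIamPolicy(resource=project_id, body={}).execute()
bindings = policy.setdefault('bindings', [])
binding = next((b for b in bindings if b['role'] == role), None)
if binding is None:
    bindings.append({'role': role, 'members': [member]})
elif member not in binding['members']:
    binding['members'].append(member)

crm.projects().setIamPolicy(resource=project_id, body={'policy': policy}).execute()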