Authenticating to GKE master in Python - kubernetes

I need to authenticate to a Kubernetes cluster provisioned in GKE using the Kubernetes Python client and the Google Cloud python client. I would prefer not to shell out to gcloud for several reasons:
relying on the system shell gcloud in a Python script when I have a native Google Cloud library is inelegant
it requires the system to have gcloud
I would have to switch users to the relevant ServiceAccount and switch back
It incurs the cost of starting/joining another process
As such, the workflow of gcloud container clusters get-credentials (which delegates to gcloud config config-helper) will not suffice to get me the API key I need. How do I get the equivalent output with the Google Cloud Python API?
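For reference, the fully-qualified cluster name that `projects.locations.clusters.get` expects can be built with a trivial helper (a sketch; the argument names are placeholders matching the config fields used in my snippet):

```python
def cluster_resource_name(project_id: str, location: str, cluster_name: str) -> str:
    # Fully-qualified resource name expected by projects.locations.clusters.get
    return f'projects/{project_id}/locations/{location}/clusters/{cluster_name}'
```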
Here is what I have so far:
import kubernetes.client
import googleapiclient.discovery
import base64
# get the cluster object from GKE
gke = googleapiclient.discovery.build('container', 'v1', credentials=config['credentials'])
name = f'projects/{config["project_id"]}/locations/{config["location"]}/clusters/{config["name"]}'
gke_clusters = gke.projects().locations().clusters()
gke_cluster = gke_clusters.get(name=name).execute()
# set up Kubernetes Config
kube_config = kubernetes.client.Configuration()
kube_config.host = 'https://{0}/'.format(gke_cluster['endpoint'])
kube_config.verify_ssl = True
#kube_config.api_key['authenticate'] = "don't know what goes here"
# regrettably, the Kubernetes client requires `ssl_ca_cert` to be a path, not the literal cert, so I will write it here.
kube_config.ssl_ca_cert = 'ssl_ca_cert'
with open(kube_config.ssl_ca_cert, 'wb') as f:
    f.write(base64.b64decode(gke_cluster['masterAuth']['clusterCaCertificate']))
# use Kubernetes client to do something
kube_client = kubernetes.client.ApiClient(configuration=kube_config)
kube_v1 = kubernetes.client.CoreV1Api(kube_client)
kube_v1.list_pod_for_all_namespaces(watch=False)
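The ssl_ca_cert-must-be-a-path workaround above can be isolated into a small stdlib-only helper that writes the decoded cert to a temporary file instead of the working directory (a sketch, not part of the original question):

```python
import base64
import tempfile

def write_ca_cert(cluster_ca_b64: str) -> str:
    """Decode the base64 clusterCaCertificate from masterAuth and write it to a
    temporary file, returning the path the kubernetes client wants."""
    pem = base64.b64decode(cluster_ca_b64)
    with tempfile.NamedTemporaryFile(mode='wb', suffix='.pem', delete=False) as f:
        f.write(pem)
    return f.name
```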

Below is a solution that pulls the access token out of the googleapiclient, rather than copy-pasting things manually.
import googleapiclient.discovery
from tempfile import NamedTemporaryFile
import kubernetes
import base64
def token(*scopes):
    # NB: googleapiclient._auth is a private module; the public google.auth
    # package exposes the same functionality via google.auth.default().
    credentials = googleapiclient._auth.default_credentials()
    scopes = [f'https://www.googleapis.com/auth/{s}' for s in scopes]
    scoped = googleapiclient._auth.with_scopes(credentials, scopes)
    googleapiclient._auth.refresh_credentials(scoped)
    return scoped.token

def kubernetes_api(cluster):
    config = kubernetes.client.Configuration()
    config.host = f'https://{cluster["endpoint"]}'
    config.api_key_prefix['authorization'] = 'Bearer'
    config.api_key['authorization'] = token('cloud-platform')
    # the kubernetes client wants a file path for the CA cert, not the cert itself
    with NamedTemporaryFile(delete=False) as cert:
        cert.write(base64.decodebytes(cluster['masterAuth']['clusterCaCertificate'].encode()))
        config.ssl_ca_cert = cert.name
    client = kubernetes.client.ApiClient(configuration=config)
    api = kubernetes.client.CoreV1Api(client)
    return api
def run(cluster):
    """You'll need to give whichever account `googleapiclient` is using the
    'Kubernetes Engine Developer' role so that it can access the Kubernetes API.
    `cluster` should be the dict you get back from `projects.zones.clusters.get`
    and the like."""
    api = kubernetes_api(cluster)
    print(api.list_pod_for_all_namespaces())
Figuring this out took longer than I care to admit. @Ivan's post helped a lot.

In order to authenticate to a GKE cluster, you can use a service account to connect to a project and then a generated secret key from GKE to authenticate to a cluster. Here are the steps:
Create a service account in GCP. Go to IAM > Service Accounts > create a service account. Give it a Project Owner role. Once SA is created, create a key and download it as json.
Upload key.json to a folder where you have .py script
Get API_TOKEN. This is your main question, you can get it by reading a token file:
First run kubectl get secrets
You will get ‘default-token-xxxxx’
run kubectl describe secrets default-token-xxxxx (replace xxxxx with your token name).
The token parameter displayed is your “API-KEY”. Copy it inside your script.
Creating a script. It is a bit different from yours for a few reasons: you need to authenticate to a project first with a service account, then you need to pass the api_token, but you also need to get the SSL certificate when authenticating to the GKE master.
import base64, pprint
import googleapiclient.discovery
import kubernetes.client
from google.oauth2 import service_account
credentials = service_account.Credentials.from_service_account_file("key.json")
gke = googleapiclient.discovery.build('container', 'v1', credentials=credentials)
name = 'projects/your_project/locations/your_zone/clusters/your_gke_cluster'
gke_clusters = gke.projects().locations().clusters()
gke_cluster = gke_clusters.get(name=name).execute()
kube_config = kubernetes.client.Configuration()
kube_config.host = 'https://{}'.format(gke_cluster['endpoint'])
kube_config.verify_ssl = True
kube_config.api_key['authorization'] = 'your_api_token'
kube_config.api_key_prefix['authorization'] = 'Bearer'
kube_config.ssl_ca_cert = 'ssl_ca_cert'
with open(kube_config.ssl_ca_cert, 'wb') as f:
    f.write(base64.b64decode(gke_cluster['masterAuth']['clusterCaCertificate']))
kube_client = kubernetes.client.ApiClient(configuration=kube_config)
kube_v1 = kubernetes.client.CoreV1Api(kube_client)
pprint.pprint(kube_v1.list_pod_for_all_namespaces())
Specific fields:
your_project - from GCP
your_zone - where the GKE cluster is created
your_gke_cluster - GKE cluster name
your_api_token - what you get in step 3.
This should be enough to authenticate you to a GKE cluster.
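Incidentally, if you read the token from step 3 through the Kubernetes API instead of kubectl describe, remember that Secret data fields come back base64-encoded; a minimal stdlib decode helper (a sketch, assuming the field is named token as in service-account secrets):

```python
import base64

def token_from_secret(secret_data: dict) -> str:
    """Decode the 'token' field of a service-account Secret's data dict."""
    return base64.b64decode(secret_data['token']).decode()
```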

Related

Azure Terraform Unable to Set secret in KeyVault during deployment

I am facing a blocker to which I can't seem to find a practical solution.
I am using azure terraform to create a storage account, and I would like, during the release pipeline, to be able to set the connection string of this storage account as a secret in an existing KeyVault.
So far I am able to retrieve secret from this KeyVault as I am using a managed identity which has the following permission upon the KeyVault:
key = get, list
secret = get, list and set
cert = get , list
the workflow process in my terraform is as follow:
Retrieve the KeyVault data:
data "azurerm_key_vault" "test" {
  name                = "test"
  resource_group_name = "KeyVault-test"
}
Retrieve the user assigned identity data:
data "azurerm_user_assigned_identity" "example" {
  name                = "mng-identity-example"
  resource_group_name = "managed-identity-example"
}
Once I have those 2 data, I tried to create the secret as follow:
resource "azurerm_key_vault_secret" "secretTest" {
  key_vault_id = data.azurerm_key_vault.test.id
  name         = "secretTest"
  value        = azurerm_storage_account.storageaccount.primary_connection_string
}
When the release pipeline runs this Terraform, it fails with an Access Denied error.
That is fully understandable, as this Terraform run does not have permission to set or retrieve the secret.
And this is the part on which I am blocked.
Can anyone help me understand how I can use my managed identity to set this secret?
I looked into terraform documentation but couldn't find any step or explanation.
Thank you so much for your help and time, and please if you need more info just ask me.
Please make sure that the service principal you are using to log in to Azure from Terraform has the same permissions you assigned to the managed identity.
provider "azurerm" {
  features {}
  subscription_id = "00000000-0000-0000-0000-000000000000"
  client_id       = "00000000-0000-0000-0000-000000000000" # This client ID needs the permissions in the Key Vault access policy that you gave the managed identity.
  client_secret   = var.client_secret
  tenant_id       = "00000000-0000-0000-0000-000000000000"
}
OR
If you are using a Service Connection to connect the DevOps pipeline to Azure and use it in Terraform, then you need to give that DevOps service connection (service principal) the permissions in the access policy.
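One way to grant those permissions from Terraform itself, rather than by hand, is an explicit access-policy resource; a sketch, assuming the data sources from the question and a provider version that uses capitalized permission names:

```terraform
data "azurerm_client_config" "current" {}

resource "azurerm_key_vault_access_policy" "pipeline" {
  key_vault_id = data.azurerm_key_vault.test.id
  tenant_id    = data.azurerm_client_config.current.tenant_id
  object_id    = data.azurerm_user_assigned_identity.example.principal_id

  secret_permissions = ["Get", "List", "Set"]
}
```

Note that this only works if the identity running Terraform is itself allowed to modify the vault's access policies.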

GKE Workload Identity PermissionDenied

I am trying to use Google's preferred "Workload Identity" method to enable my GKE app to securely access secrets from Google Secrets.
I've completed the setup and even checked all steps in the Troubleshooting section (https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity?hl=sr-ba#troubleshooting) but I'm still getting the following error in my logs:
Unhandled exception. Grpc.Core.RpcException:
Status(StatusCode=PermissionDenied, Detail="Permission
'secretmanager.secrets.list' denied for resource
'projects/my-project' (or it may not exist).")
I figured the problem was due to the node pool not using the correct service account, so I recreated it, this time specifying the correct service account.
The service account has the following roles added:
Cloud Build Service Account
Kubernetes Engine Developer
Container Registry Service Agent
Secret Manager Secret Accessor
Secret Manager Viewer
The relevant source code for the package I am using to authenticate is as follows:
var data = new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase);
var request = new ListSecretsRequest
{
    ParentAsProjectName = ProjectName.FromProject(projectName),
};
var secrets = secretManagerServiceClient.ListSecrets(request);
foreach (var secret in secrets)
{
    var value = secretManagerServiceClient.AccessSecretVersion($"{secret.Name}/versions/latest");
    string secretVal = this.manager.Load(value.Payload);
    string configKey = this.manager.GetKey(secret.SecretName);
    data.Add(configKey, secretVal);
}
Data = data;
Ref. https://github.com/jsukhabut/googledotnet
Am I missing a step in the process?
Any idea why Google is still saying "Permission 'secretmanager.secrets.list' denied for resource 'projects/my-project' (or it may not exist)?"
Like @sethvargo mentioned in the comments, you need to map the service account to your pod, because Workload Identity doesn't use the underlying node identity; instead it maps a Kubernetes service account to a GCP service account. Everything happens at the per-pod level with Workload Identity.
Assign a Kubernetes service account to the application and configure it to act as a Google service account.
1. Create a GCP service account with the required permissions.
2. Create a Kubernetes service account.
3. Assign the Kubernetes service account permission to impersonate the GCP service account.
4. Run your workload as the Kubernetes service account.
Also make sure you are using the project ID, not the project name, in the project or secret path.
You cannot update the service account of an already created pod.
Refer to the linked documentation to add a service account to the pods.
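As a sketch of the wiring in step 3, the Kubernetes service account gets annotated to point at the GCP service account (all names below are placeholders):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-ksa
  namespace: default
  annotations:
    # tells GKE Workload Identity which GCP service account to impersonate
    iam.gke.io/gcp-service-account: my-gsa@my-project.iam.gserviceaccount.com
```

On the GCP side, the binding grants roles/iam.workloadIdentityUser on the GCP service account to the member serviceAccount:my-project.svc.id.goog[default/my-ksa].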

Getting kubernetes config file using google-cloud API

I'm able to play around with google cloud's kubernetes API like this:
import os
import time
import json
from pprint import pprint
from google.oauth2 import service_account
import googleapiclient.discovery
from six.moves import input
# https://developers.google.com/identity/protocols/oauth2/scopes
scopes = [
    'https://www.googleapis.com/auth/cloud-platform',
    'https://www.googleapis.com/auth/compute',
]
credentials = service_account.Credentials.from_service_account_file(
    'service_account.json',
    scopes=scopes,
)
container = googleapiclient.discovery.build('container', 'v1', credentials = credentials)
loc = container.projects().locations()
client = loc.getServerConfig(name="projects/MY_PROJECT/locations/europe-west1-b")
client.execute()
However, I'd like to achieve the equivalent of
gcloud container clusters get-credentials MY_CLUSTER --zone=europe-west1-b --project MY_PROJECT
i.e. get the complete kubernetes config+authorization file (which I can then use with the kubernetes Python module)
When looking at the API
https://cloud.google.com/kubernetes-engine/docs/reference/rest
It seems to be missing that get-credentials call? Or am I at the wrong API?
Google Cloud uses a short-lived access token (typically about an hour) and relies on the gcloud tooling to obtain and refresh it.
If you want to create a long-lived credential, you can create a service account here https://console.cloud.google.com/iam-admin/serviceaccounts with the role "Kubernetes Engine Developer" and download the JSON key file. Configure your kubeconfig to use the gcp auth provider, for example
[{name: user-1, user: {auth-provider: {name: gcp}}}]
Set the environment variable GOOGLE_APPLICATION_CREDENTIALS to the absolute path to the JSON file downloaded for the service account. Works with kubectl as it has special support for it.
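Expanded into the usual kubeconfig YAML, that user entry looks like this (the user name is a placeholder):

```yaml
users:
- name: user-1
  user:
    auth-provider:
      name: gcp
```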
If you want to use it with e.g. Python, you need to obtain the token from the service account:
kubectl describe serviceaccount myserviceaccount
kubectl describe secrets [secret-name]
This can be used in the library
config.load_kube_config()
client.configuration.api_key['authorization'] = 'your token goes here'
client.configuration.api_key_prefix['authorization'] = 'Bearer'
Note that long lived credentials must be guarded especially well.

Can't connect to GCS bucket from Python despite being logged in

I have a GCS bucket set up that contains data that I want to access remotely. As per the instructions, I have logged in via gcloud auth login, and have confirmed that I have an active, credentialed account via gcloud auth list. However, when I try to access my bucket (using the Python google.cloud.storage API), I get the following:
HttpError: Anonymous caller does not have storage.objects.list access to <my-bucket-name>.
I'm not sure why it is being accessed anonymously, since I am clearly logged in. Is there something obvious I am missing?
The Python GCP library (and others) uses a different authentication mechanism than the gcloud command.
Follow this guide to set up your environment and get access to GCS with Python.
gcloud auth login sets up the gcloud command-line tool with your credentials (note that gcloud auth application-default login is the variant that writes Application Default Credentials, which the client libraries can pick up).
However, the way forward when executing code is to have a Service Account. Once the environment variable GOOGLE_APPLICATION_CREDENTIALS has been set, Python will use the Service Account credentials.
Edit
import os
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "path_to_your_.json_credential_file"
Edit
And then, to download gs://my_bucket/my_file.csv to a file: (from the python-docs-samples)
download_blob('my_bucket', 'my_file.csv', 'local/path/to/file.csv')
from google.cloud import storage

def download_blob(bucket_name, source_blob_name, destination_file_name):
    """Downloads a blob from the bucket."""
    storage_client = storage.Client()
    bucket = storage_client.get_bucket(bucket_name)
    blob = bucket.blob(source_blob_name)
    blob.download_to_filename(destination_file_name)
    print('Blob {} downloaded to {}.'.format(
        source_blob_name,
        destination_file_name))

Configure terraform to connect to IBM Cloud

I am trying to connect Terraform to IBM Cloud, and I got messed up with Softlayer and IBM Cloud credentials.
I followed the instructions on IBM's site to connect my Terraform to IBM Cloud, and I am confused, because I may need to use both SoftLayer and IBM Cloud connection information like API keys etc.
I cannot run terraform init and/or plan, because some information is missing. Now I am asked for the organization (var.org). Sometimes I get asked about the SoftLayer credentials. Our account started in January 2019, and I am sure we have not worked with SoftLayer at all and have only heard about the API key from IBM Cloud.
Does someone have an example of how terraform.tfvars should look to work properly with IBM Cloud Kubernetes Service, VPC and classic infrastructure?
Thank you very much.
Jan
I recommend starting with these two tutorials, dealing with a LAMP stack on classic vertical servers and with Kubernetes and other services. Both provide step-by-step instructions and guide you through the process of setting up Terraform-based deployments.
They provide the necessary code in GitHub repos. For the Kubernetes sample credentials.tfvars you only need the API key:
ibmcloud_api_key = "your api key"
For public_key, provide a string containing the public key rather than a path to the file that contains the key.
$ cat ~/.ssh/id_rsa.pub
ssh-rsa CCCde...
Then in terraform:
resource "ibm_compute_ssh_key" "test_ssh_key" {
  public_key = "ssh-rsa CCCde..."
}
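Alternatively, Terraform's built-in file() function (combined with pathexpand() for the home directory) can read the key from disk at plan time; a sketch, with path and label as placeholders:

```terraform
resource "ibm_compute_ssh_key" "test_ssh_key" {
  label      = "test-key"
  public_key = file(pathexpand("~/.ssh/id_rsa.pub"))
}
```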
Alternatively you can use a key that you created earlier:
data "ibm_compute_ssh_key" "ssh_key" {
  label = "yourexistingkey"
}

resource "ibm_compute_vm_instance" "onprem_vsi" {
  ssh_key_ids = ["${data.ibm_compute_ssh_key.ssh_key.id}"]
}
Here is what you will need to run an init or plan for IBM Cloud Kubernetes Service clusters with Terraform...
In your .tf file
terraform {
  required_providers {
    ibm = {
      source = "IBM-Cloud/ibm"
    }
  }
}
provider "ibm" {
  ibmcloud_api_key      = var.ibmcloud_api_key
  iaas_classic_username = var.classic_username
  iaas_classic_api_key  = var.classic_api_key
}
In your shell, set the following environment variables
export IBMCLOUD_API_KEY=<value of your IBM Cloud api key>
export CLASSIC_API_KEY=<value of your IBM Cloud classic (i.e. SL) api key>
export CLASSIC_USERNAME=<Value of your IBM Cloud classic username>
Run your init as follows:
terraform init
Run your plan as follows:
terraform plan \
-var ibmcloud_api_key="${IBMCLOUD_API_KEY}" \
-var classic_api_key="${CLASSIC_API_KEY}" \
-var classic_username="${CLASSIC_USERNAME}"