How to set GOOGLE_APPLICATION_CREDENTIALS on GKE running through Kubernetes - kubernetes

With the help of Kubernetes I am running daily jobs on GKE. Based on a cron schedule configured in Kubernetes, a new container spins up every day and tries to insert some data into BigQuery.
Our setup has 2 different projects in GCP: in one project we maintain the data in BigQuery, and in the other project we run all of GKE. When GKE has to interact with the other project's resources, my guess is that I have to set an environment variable named GOOGLE_APPLICATION_CREDENTIALS which points to a service account JSON file, but since Kubernetes spins up a new container every day, I am not sure how and where I should set this variable.
Thanks in Advance!
NOTE: this file is parsed as a golang template by the drone-gke plugin.
---
apiVersion: v1
kind: Secret
metadata:
  name: my-data-service-account-credentials
type: Opaque
data:
  sa_json: "base64JsonServiceAccount"
---
apiVersion: v1
kind: Pod
metadata:
  name: adtech-ads-apidata-el-adunit-pod
spec:
  containers:
    - name: adtech-ads-apidata-el-adunit-container
      volumeMounts:
        - name: service-account-credentials-volume
          mountPath: "/etc/gcp"
          readOnly: true
  volumes:
    - name: service-account-credentials-volume
      secret:
        secretName: my-data-service-account-credentials
        items:
          - key: sa_json
            path: sa_credentials.json
This is our cron job for loading the AdUnit data:
apiVersion: batch/v2alpha1
kind: CronJob
metadata:
  name: adtech-ads-apidata-el-adunit
spec:
  schedule: "*/5 * * * *"
  suspend: false
  concurrencyPolicy: Replace
  successfulJobsHistoryLimit: 10
  failedJobsHistoryLimit: 10
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: adtech-ads-apidata-el-adunit-container
              image: {{.image}}
              args:
                - -cp
                - opt/nyt/DFPDataIngestion-1.0-jar-with-dependencies.jar
                - com.nyt.cron.AdUnitJob
              env:
                - name: ENV_APP_NAME
                  value: "{{.env_app_name}}"
                - name: ENV_APP_CONTEXT_NAME
                  value: "{{.env_app_context_name}}"
                - name: ENV_GOOGLE_PROJECTID
                  value: "{{.env_google_projectId}}"
                - name: ENV_GOOGLE_DATASETID
                  value: "{{.env_google_datasetId}}"
                - name: ENV_REPORTING_DATASETID
                  value: "{{.env_reporting_datasetId}}"
                - name: ENV_ADBRIDGE_DATASETID
                  value: "{{.env_adbridge_datasetId}}"
                - name: ENV_SALESFORCE_DATASETID
                  value: "{{.env_salesforce_datasetId}}"
                - name: ENV_CLOUD_PLATFORM_URL
                  value: "{{.env_cloud_platform_url}}"
                - name: ENV_SMTP_HOST
                  value: "{{.env_smtp_host}}"
                - name: ENV_TO_EMAIL
                  value: "{{.env_to_email}}"
                - name: ENV_FROM_EMAIL
                  value: "{{.env_from_email}}"
                - name: ENV_AWS_USERNAME
                  value: "{{.env_aws_username}}"
                - name: ENV_CLIENT_ID
                  value: "{{.env_client_id}}"
                - name: ENV_REFRESH_TOKEN
                  value: "{{.env_refresh_token}}"
                - name: ENV_NETWORK_CODE
                  value: "{{.env_network_code}}"
                - name: ENV_APPLICATION_NAME
                  value: "{{.env_application_name}}"
                - name: ENV_SALESFORCE_USERNAME
                  value: "{{.env_salesforce_username}}"
                - name: ENV_SALESFORCE_URL
                  value: "{{.env_salesforce_url}}"
                - name: GOOGLE_APPLICATION_CREDENTIALS
                  value: "/etc/gcp/sa_credentials.json"
                - name: ENV_CLOUD_SQL_URL
                  valueFrom:
                    secretKeyRef:
                      name: secrets
                      key: cloud_sql_url
                - name: ENV_AWS_PASSWORD
                  valueFrom:
                    secretKeyRef:
                      name: secrets
                      key: aws_password
                - name: ENV_CLIENT_SECRET
                  valueFrom:
                    secretKeyRef:
                      name: secrets
                      key: dfp_client_secret
                - name: ENV_SALESFORCE_PASSWORD
                  valueFrom:
                    secretKeyRef:
                      name: secrets
                      key: salesforce_password
          restartPolicy: OnFailure

So, if your GKE project is project my-gke, and the project containing the services/things your GKE containers need access to is project my-data, one approach is to:
Create a service account in the my-data project. Give it whatever GCP roles/permissions are needed (ex. roles/bigquery.dataViewer if you have some BigQuery tables that your my-gke GKE containers need to read).
Create a service account key for that service account. When you do this in the console following https://cloud.google.com/iam/docs/creating-managing-service-account-keys, you should automatically download a .json file containing the SA credentials.
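If you prefer the CLI to the console, a rough gcloud equivalent of the two steps above might look like this (the service account name my-gke-bq-reader and the role are examples, not requirements):
gcloud iam service-accounts create my-gke-bq-reader --project my-data
gcloud projects add-iam-policy-binding my-data \
    --member serviceAccount:my-gke-bq-reader@my-data.iam.gserviceaccount.com \
    --role roles/bigquery.dataViewer
gcloud iam service-accounts keys create the-downloaded-SA-credentials.json \
    --iam-account my-gke-bq-reader@my-data.iam.gserviceaccount.com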
Create a Kubernetes secret resource for those service account credentials. It might look something like this:
apiVersion: v1
kind: Secret
metadata:
  name: my-data-service-account-credentials
type: Opaque
data:
  sa_json: <contents of running 'base64 the-downloaded-SA-credentials.json'>
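Equivalently, you can have kubectl do the base64 encoding for you; the key name sa_json matches the manifest above and the file name is whatever you downloaded in the previous step:
kubectl create secret generic my-data-service-account-credentials \
    --from-file=sa_json=the-downloaded-SA-credentials.json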
Mount the credentials in the container that needs access:
[...]
spec:
  containers:
    - name: my-container
      volumeMounts:
        - name: service-account-credentials-volume
          mountPath: /etc/gcp
          readOnly: true
[...]
  volumes:
    - name: service-account-credentials-volume
      secret:
        secretName: my-data-service-account-credentials
        items:
          - key: sa_json
            path: sa_credentials.json
Set the GOOGLE_APPLICATION_CREDENTIALS environment variable in the container to point to the path of the mounted credentials:
[...]
spec:
  containers:
    - name: my-container
      env:
        - name: GOOGLE_APPLICATION_CREDENTIALS
          value: /etc/gcp/sa_credentials.json
With that, any official GCP client (ex. the GCP Python client, the GCP Java client, etc.) should respect the GOOGLE_APPLICATION_CREDENTIALS env var and, when making API requests, automatically use the credentials of the my-data service account whose credentials .json file you created and mounted.
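A quick way to sanity-check the wiring once the Pod is running (the pod name my-pod is hypothetical; substitute your own):
kubectl exec my-pod -- printenv GOOGLE_APPLICATION_CREDENTIALS
kubectl exec my-pod -- ls -l /etc/gcp/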

Related

Assigning configmap entries to container's env variables

I have the ConfigMap below, which pulls secrets from GSM (Google Secret Manager).
apiVersion: v1
kind: ConfigMap
metadata:
  name: db-config
  labels:
    app: poc
data:
  entrypoint.sh: |
    #!/usr/bin/env bash
    set -euo pipefail
    echo $(gcloud secrets versions access --project=<project> --secret=<secret-name>) >> /var/config/dburl.env
---
apiVersion: v1
kind: Pod
metadata:
  name: poc-pod
  namespace: default
spec:
  initContainers:
    - image: gcr.io/google.com/cloudsdktool/cloud-sdk:slim
      name: init
      command: ["/tmp/entrypoint.sh"]
      volumeMounts:
        - mountPath: /tmp
          name: entrypoint
        - mountPath: /var/config
          name: secrets
  volumes:
    # volumes mounting
    ...
  containers:
    - image: gcr.io/google.com/cloudsdktool/cloud-sdk:slim
      name: my-container
      volumeMounts:
        - mountPath: /var/config
          name: secrets
      env:
        - name: HOST
          ?? # Assign value fetched in configmap
How do I assign values from the files created by the ConfigMap to the container's env variables? Or is there any other approach available to achieve this?
I need to send a couple of env variables to a Spring Cloud Config service. It's hard to find any guide/documentation for this. Any help is appreciated!
One of the best ways to access secrets in Google Secret Manager from GKE is by using the External Secrets Operator, which you can install easily using Helm.
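A minimal install sketch, assuming the operator's documented Helm chart and repo (adjust the namespace as you like):
helm repo add external-secrets https://charts.external-secrets.io
helm repo update
helm install external-secrets external-secrets/external-secrets \
    --namespace external-secrets --create-namespace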
Once it is installed, you create a service account with the role roles/secretmanager.secretAccessor, then download its key file and save it in a k8s secret:
kubectl create secret generic gcpsm-secret --from-file=service-account-credentials=key.json
Then you can define your secret store (it is not exclusive to GCP; it works with other secret managers such as AWS Secrets Manager):
apiVersion: external-secrets.io/v1alpha1
kind: SecretStore
metadata:
  name: gcp-secret-manager
spec:
  provider:
    gcpsm:
      auth:
        secretRef:
          secretAccessKeySecretRef:
            name: gcpsm-secret # the secret you created in the first step
            key: service-account-credentials
      projectID: <your project id>
Now, you can create an external secret, and the operator will read the secret from the secret manager and create a k8s secret for you:
apiVersion: external-secrets.io/v1alpha1
kind: ExternalSecret
metadata:
  name: gcp-external-secret
spec:
  secretStoreRef:
    kind: SecretStore
    name: gcp-secret-manager
  target:
    name: k8s-secret # the k8s secret name
  data:
    - secretKey: host # the key name in the secret
      remoteRef:
        key: <secret-name in gsm>
Finally, in your pod, you can access the secret like this:
apiVersion: v1
kind: Pod
metadata:
  name: poc-pod
  namespace: default
spec:
  initContainers:
    - image: gcr.io/google.com/cloudsdktool/cloud-sdk:slim
      name: init
      command: ["/tmp/entrypoint.sh"]
      volumeMounts:
        - mountPath: /tmp
          name: entrypoint
        - mountPath: /var/config
          name: secrets
  volumes:
    # volumes mounting
    ...
  containers:
    - image: gcr.io/google.com/cloudsdktool/cloud-sdk:slim
      name: my-container
      volumeMounts:
        - mountPath: /var/config
          name: secrets
      env:
        - name: HOST
          valueFrom:
            secretKeyRef:
              name: k8s-secret
              key: host
If you update the secret value in the secret manager, you should recreate the external secret to update the k8s secret value.
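Depending on your operator version, you may be able to avoid recreating it by setting a refresh interval on the ExternalSecret, so the operator re-reads the value from Secret Manager periodically (field name as documented by External Secrets Operator; treat this as a sketch for your version):
apiVersion: external-secrets.io/v1alpha1
kind: ExternalSecret
metadata:
  name: gcp-external-secret
spec:
  refreshInterval: 1h # re-sync from Secret Manager every hour
  ...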

How to supply a value of a server in NFS mount in a k8 Deployment via a ConfigMap

I'm writing a helm chart where I need to supply an nfs.server value for the volume mount from the ConfigMap (efs-url in the example below).
There are examples in the docs on how to pass the value from the ConfigMap to env variables or even mount ConfigMaps. I understand how I can pass this value from the values.yaml but I just can't find an example on how it can be done using a ConfigMap.
I have control over this ConfigMap so I can reformat it as needed.
Am I missing something very obvious?
Is it even possible to do?
If not, what are the possible workarounds?
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: efs-url
data:
  url: yourEFSsystemID.efs.yourEFSregion.amazonaws.com
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: efs-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: efs-provisioner
    spec:
      containers:
        - name: efs-provisioner
          image: quay.io/external_storage/efs-provisioner:latest
          env:
            - name: FILE_SYSTEM_ID
              valueFrom:
                configMapKeyRef:
                  name: efs-provisioner
                  key: file.system.id
            - name: AWS_REGION
              valueFrom:
                configMapKeyRef:
                  name: efs-provisioner
                  key: aws.region
            - name: PROVISIONER_NAME
              valueFrom:
                configMapKeyRef:
                  name: efs-provisioner
                  key: provisioner.name
          volumeMounts:
            - name: pv-volume
              mountPath: /persistentvolumes
      volumes:
        - name: pv-volume
          nfs:
            server: <<< VALUE SHOULD COME FROM THE CONFIG MAP >>>
            path: /
Having analysed the comments, it looks like the ConfigMap approach is not suitable for this example, as a ConfigMap
is an API object used to store non-confidential data in key-value pairs. Pods can consume ConfigMaps as environment variables, command-line arguments, or as configuration files in a volume.
In other words, a ConfigMap value cannot be substituted directly into a Pod spec field such as nfs.server. To read more about ConfigMaps and how they can be utilized, see the "ConfigMaps" and "Configure a Pod to Use a ConfigMap" sections of the Kubernetes documentation.
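Since this is a Helm chart anyway, a common workaround is to template the server field from values.yaml rather than from a ConfigMap (the efs.url value name below is illustrative):
# values.yaml
efs:
  url: yourEFSsystemID.efs.yourEFSregion.amazonaws.com

# templates/deployment.yaml (fragment)
      volumes:
        - name: pv-volume
          nfs:
            server: {{ .Values.efs.url }}
            path: /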

Providing a .env file in Kubernetes

How do I provide a .env file in Kubernetes? I am using a Node.js package that populates my process.env from my .env file.
You can do it in two ways:
Providing env variables for the container:
During creation of a pod, you can set environment variables for the containers that run in that Pod. To set environment variables, include the env field in the configuration file.
ex:
apiVersion: v1
kind: Pod
metadata:
  name: envar-demo
  labels:
    purpose: demonstrate-envars
spec:
  containers:
    - name: envar-demo-container
      image: gcr.io/google-samples/node-hello:1.0
      env:
        - name: DEMO_GREETING
          value: "Hello from the environment"
        - name: DEMO_FAREWELL
          value: "Such a sweet sorrow"
Using ConfigMaps:
First you need to create a ConfigMap; an example is below. The data field holds your values as key-value pairs.
apiVersion: v1
kind: ConfigMap
metadata:
  name: special-config
  namespace: default
data:
  SPECIAL_LEVEL: very
  SPECIAL_TYPE: charm
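Since you already maintain a .env file, you can also generate the ConfigMap directly from it instead of writing the manifest by hand (this assumes the file sits in the current directory):
kubectl create configmap special-config --from-env-file=.env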
Now, use envFrom to define all of the ConfigMap's data as container environment variables, ex:
apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod
spec:
  containers:
    - name: test-container
      image: k8s.gcr.io/busybox
      command: [ "/bin/sh", "-c", "env" ]
      envFrom:
        - configMapRef:
            name: special-config
  restartPolicy: Never
You can even specify individual fields with env, like below:
env:
  - name: SPECIAL_LEVEL_KEY
    valueFrom:
      configMapKeyRef:
        name: special-config
        key: SPECIAL_LEVEL
  - name: SPECIAL_TYPE_KEY
    valueFrom:
      configMapKeyRef:
        name: special-config
        key: SPECIAL_TYPE
Ref: the Kubernetes documentation on ConfigMaps and setting environment variables.

Pod does not see secrets

The pod, created in the same default namespace as its secret, does not see values from it.
The secret's file contains the following:
apiVersion: v1
kind: Secret
metadata:
  name: backend-secret
data:
  SECRET_KEY: <base64 of value>
  DEBUG: <base64 of value>
After creating this secret via kubectl create -f backend-secret.yaml, I launch the pod with the following configuration:
apiVersion: v1
kind: Pod
metadata:
  name: backend
spec:
  containers:
    - image: backend
      name: backend
      ports:
        - containerPort: 8000
  imagePullSecrets:
    - name: dockerhub-credentials
  volumes:
    - name: secret
      secret:
        secretName: backend-secret
But the pod crashes when trying to read this environment variable via Python's os.environ['DEBUG'] line.
How do I make it work?
If you mount a secret as a volume, it will be mounted in the specified directory, with each key name becoming a file name.
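For instance, with the volume from your pod spec and a hypothetical mountPath of /etc/secret, the keys would show up as files inside the container (paths illustrative):
volumeMounts:
  - name: secret
    mountPath: /etc/secret
    readOnly: true
# inside the container:
#   /etc/secret/DEBUG
#   /etc/secret/SECRET_KEY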
If you want to access secrets as environment variables in your pod, then you need to reference the secret in an environment variable, like the following.
apiVersion: v1
kind: Pod
metadata:
  name: backend
spec:
  containers:
    - image: backend
      name: backend
      ports:
        - containerPort: 8000
      env:
        - name: DEBUG
          valueFrom:
            secretKeyRef:
              name: backend-secret
              key: DEBUG
        - name: SECRET_KEY
          valueFrom:
            secretKeyRef:
              name: backend-secret
              key: SECRET_KEY
  imagePullSecrets:
    - name: dockerhub-credentials
Ref: https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets-as-environment-variables
Finally, I've used these lines at Deployment.spec.template.spec.containers:
containers:
  - name: backend
    image: zuber93/wts_backend
    imagePullPolicy: Always
    envFrom:
      - secretRef:
          name: backend-secret
    ports:
      - containerPort: 8000
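To confirm the variables are actually injected, a quick check from outside the pod (this assumes the Deployment is named backend; adjust to your own name):
kubectl exec deploy/backend -- printenv DEBUG SECRET_KEY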

Issue mounting Cloud SQL credentials to a container in Kubernetes

I'm attempting to migrate over to Cloud SQL (Postgres). I have the following deployment in Kubernetes, having followed these instructions: https://cloud.google.com/sql/docs/mysql/connect-container-engine
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: menu-service
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: menu-service
    spec:
      volumes:
        - name: cloudsql-instance-credentials
          secret:
            secretName: cloudsql-instance-credentials
        - name: cloudsql
          emptyDir:
        - name: ssl-certs
          hostPath:
            path: /etc/ssl/certs
      containers:
        - image: gcr.io/cloudsql-docker/gce-proxy:1.11
          name: cloudsql-proxy
          command: ["/cloud_sql_proxy", "--dir=/cloudsql",
                    "-instances=tabb-168314:europe-west2:production=tcp:5432",
                    "-credential_file=/secrets/cloudsql/credentials.json"]
          volumeMounts:
            - name: cloudsql-instance-credentials
              mountPath: /secrets/cloudsql
              readOnly: true
            - name: ssl-certs
              mountPath: /etc/ssl/certs
            - name: cloudsql
              mountPath: /cloudsql
        - name: menu-service
          image: eu.gcr.io/tabb-168314/menu-service:develop
          imagePullPolicy: Always
          env:
            - name: MICRO_BROKER
              value: "nats"
            - name: MICRO_BROKER_ADDRESS
              value: "nats.staging:4222"
            - name: MICRO_REGISTRY
              value: "kubernetes"
            - name: ENV
              value: "staging"
            - name: PORT
              value: "8080"
            - name: POSTGRES_HOST
              value: "127.0.0.1:5432"
            - name: POSTGRES_USER
              valueFrom:
                secretKeyRef:
                  name: cloudsql-db-credentials
                  key: username
            - name: POSTGRES_PASS
              valueFrom:
                secretKeyRef:
                  name: cloudsql-db-credentials
                  key: password
            - name: POSTGRES_DB
              value: "menus"
          ports:
            - containerPort: 8080
But unfortunately I'm getting this error when trying to update the deployment:
MountVolume.SetUp failed for volume "kubernetes.io/secret/69b0ec99-baaf-11e7-82b8-42010a84010c-cloudsql-instance-credentials" (spec.Name: "cloudsql-instance-credentials") pod "69b0ec99-baaf-11e7-82b8-42010a84010c" (UID: "69b0ec99-baaf-11e7-82b8-42010a84010c") with: secrets "cloudsql-instance-credentials" not found
Error syncing pod, skipping: timeout expired waiting for volumes to attach/mount for pod "staging"/"menu-service-1982520680-qzwzn". list of unattached/unmounted volumes=[cloudsql-instance-credentials]
Have I missed something here?
You are missing (at least) one of the secrets needed to start up this Pod, namely cloudsql-instance-credentials.
From https://cloud.google.com/sql/docs/mysql/connect-container-engine:
You need two secrets to enable your Container Engine application to access the data in your Cloud SQL instance:
The cloudsql-instance-credentials secret contains the service account.
The cloudsql-db-credentials secret provides the proxy user account and password. (I think you have this created, I can't see an error message about this one)
To create your secrets:
Create the secret containing the Service Account which enables authentication to Cloud SQL:
kubectl create secret generic cloudsql-instance-credentials \
--from-file=credentials.json=[PROXY_KEY_FILE_PATH]
[...]
The link above also describes how to create a GCP service account for this purpose, if you don't have one created already.
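If you still need to create that service account and its key, a rough gcloud sketch looks like this (the account name cloudsql-proxy is an example; the Cloud SQL docs list the exact role you need, typically roles/cloudsql.client):
gcloud iam service-accounts create cloudsql-proxy --project tabb-168314
gcloud projects add-iam-policy-binding tabb-168314 \
    --member serviceAccount:cloudsql-proxy@tabb-168314.iam.gserviceaccount.com \
    --role roles/cloudsql.client
gcloud iam service-accounts keys create credentials.json \
    --iam-account cloudsql-proxy@tabb-168314.iam.gserviceaccount.com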