Issue mounting Cloud SQL credentials to a container in Kubernetes - postgresql

I'm attempting to migrate over to Cloud SQL (Postgres). I have the following deployment in Kubernetes, having followed these instructions: https://cloud.google.com/sql/docs/mysql/connect-container-engine
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: menu-service
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: menu-service
    spec:
      volumes:
        - name: cloudsql-instance-credentials
          secret:
            secretName: cloudsql-instance-credentials
        - name: cloudsql
          emptyDir: {}
        - name: ssl-certs
          hostPath:
            path: /etc/ssl/certs
      containers:
        - image: gcr.io/cloudsql-docker/gce-proxy:1.11
          name: cloudsql-proxy
          command: ["/cloud_sql_proxy", "--dir=/cloudsql",
                    "-instances=tabb-168314:europe-west2:production=tcp:5432",
                    "-credential_file=/secrets/cloudsql/credentials.json"]
          volumeMounts:
            - name: cloudsql-instance-credentials
              mountPath: /secrets/cloudsql
              readOnly: true
            - name: ssl-certs
              mountPath: /etc/ssl/certs
            - name: cloudsql
              mountPath: /cloudsql
        - name: menu-service
          image: eu.gcr.io/tabb-168314/menu-service:develop
          imagePullPolicy: Always
          env:
            - name: MICRO_BROKER
              value: "nats"
            - name: MICRO_BROKER_ADDRESS
              value: "nats.staging:4222"
            - name: MICRO_REGISTRY
              value: "kubernetes"
            - name: ENV
              value: "staging"
            - name: PORT
              value: "8080"
            - name: POSTGRES_HOST
              value: "127.0.0.1:5432"
            - name: POSTGRES_USER
              valueFrom:
                secretKeyRef:
                  name: cloudsql-db-credentials
                  key: username
            - name: POSTGRES_PASS
              valueFrom:
                secretKeyRef:
                  name: cloudsql-db-credentials
                  key: password
            - name: POSTGRES_DB
              value: "menus"
          ports:
            - containerPort: 8080
But unfortunately I'm getting this error when trying to update the deployment:
MountVolume.SetUp failed for volume "kubernetes.io/secret/69b0ec99-baaf-11e7-82b8-42010a84010c-cloudsql-instance-credentials" (spec.Name: "cloudsql-instance-credentials") pod "69b0ec99-baaf-11e7-82b8-42010a84010c" (UID: "69b0ec99-baaf-11e7-82b8-42010a84010c") with: secrets "cloudsql-instance-credentials" not found
Error syncing pod, skipping: timeout expired waiting for volumes to attach/mount for pod "staging"/"menu-service-1982520680-qzwzn". list of unattached/unmounted volumes=[cloudsql-instance-credentials]
Have I missed something here?

You are missing (at least) one of the secrets needed to start up this Pod, namely cloudsql-instance-credentials.
From https://cloud.google.com/sql/docs/mysql/connect-container-engine:
You need two secrets to enable your Container Engine application to access the data in your Cloud SQL instance:
The cloudsql-instance-credentials secret contains the service account.
The cloudsql-db-credentials secret provides the proxy user account and password. (I think you have already created this one, since there is no error message about it.)
To create your secrets:
Create the secret containing the Service Account which enables authentication to Cloud SQL:
kubectl create secret generic cloudsql-instance-credentials \
--from-file=credentials.json=[PROXY_KEY_FILE_PATH]
[...]
The link above also describes how to create a GCP service account for this purpose, if you don't have one created already.
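Since Secrets are namespaced, it is also worth checking that the secret exists in the same namespace as the Pod; the error message mentions the staging namespace. A quick check along these lines, adjusting the namespace to your setup:
kubectl get secret cloudsql-instance-credentials --namespace=staging
kubectl create secret generic cloudsql-instance-credentials \
  --from-file=credentials.json=[PROXY_KEY_FILE_PATH] \
  --namespace=staging
If the secret was created in default but the Deployment runs in staging, the Pod will not see it and you get exactly this "not found" mount error.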

Related

Pod does not see secrets

The pod, created in the same default namespace as its secret, does not see values from the secret.
The secret's file contains the following:
apiVersion: v1
kind: Secret
metadata:
  name: backend-secret
data:
  SECRET_KEY: <base64 of value>
  DEBUG: <base64 of value>
After creating this secret via kubectl create -f backend-secret.yaml, I launch a pod with the following configuration:
apiVersion: v1
kind: Pod
metadata:
  name: backend
spec:
  containers:
    - image: backend
      name: backend
      ports:
        - containerPort: 8000
  imagePullSecrets:
    - name: dockerhub-credentials
  volumes:
    - name: secret
      secret:
        secretName: backend-secret
But the pod crashes when trying to read this environment variable via Python's os.environ['DEBUG'].
How to make it work?
If you mount a secret as a volume, it is mounted into the specified directory and each key in the secret becomes a file named after that key.
If you want to access the secret as environment variables in your pod, then you need to reference the secret in an environment variable like the following.
apiVersion: v1
kind: Pod
metadata:
  name: backend
spec:
  containers:
    - image: backend
      name: backend
      ports:
        - containerPort: 8000
      env:
        - name: DEBUG
          valueFrom:
            secretKeyRef:
              name: backend-secret
              key: DEBUG
        - name: SECRET_KEY
          valueFrom:
            secretKeyRef:
              name: backend-secret
              key: SECRET_KEY
  imagePullSecrets:
    - name: dockerhub-credentials
Ref: https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets-as-environment-variables
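To verify that the variables actually reach the container, a quick check from outside the pod can help; a sketch, assuming the pod is named backend and its image ships a shell:
kubectl exec backend -- sh -c 'echo $DEBUG'
If the variable prints empty, the secret reference (name or key) is the first thing to re-check.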
Finally, I've used these lines at Deployment.spec.template.spec.containers:
containers:
  - name: backend
    image: zuber93/wts_backend
    imagePullPolicy: Always
    envFrom:
      - secretRef:
          name: backend-secret
    ports:
      - containerPort: 8000

kubernetes deployment file inject environment variables on a pre script

I have an Elixir app connecting to Postgres using the Cloud SQL proxy.
Here is my deployment.yaml, which I deploy on Kubernetes and which works well;
the Postgres connection user name and password are taken by the image from the environment variables in the YAML:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-app
  namespace: production
spec:
  replicas: 1
  revisionHistoryLimit: 1
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: my-app
        tier: backend
    spec:
      securityContext:
        runAsUser: 0
        runAsNonRoot: false
      containers:
        - name: my-app
          image: my-image:1.0.1
          volumeMounts:
            - name: secrets-volume
              mountPath: /secrets
              readOnly: true
            - name: config-volume
              mountPath: /beamconfig
          ports:
            - containerPort: 80
          args:
            - foreground
          env:
            - name: POSTGRES_HOSTNAME
              value: localhost
            - name: POSTGRES_USERNAME
              value: postgres
            - name: POSTGRES_PASSWORD
              value: "123456"
        # proxy_container
        - name: cloudsql-proxy
          image: gcr.io/cloudsql-docker/gce-proxy:1.11
          command: ["/cloud_sql_proxy", "--dir=/cloudsql",
                    "-instances=my-project:region:my-postgres-instance=tcp:5432",
                    "-credential_file=/secrets/cloudsql/credentials.json"]
          volumeMounts:
            - name: cloudsql-instance-credentials
              mountPath: /secrets/cloudsql
              readOnly: true
            - name: cloudsql
              mountPath: /cloudsql
      # volumes
      volumes:
        - name: secrets-volume
          secret:
            secretName: gcloud-json
        - name: cloudsql-instance-credentials
          secret:
            secretName: cloudsql-instance-credentials
        - name: cloudsql
          emptyDir: {}
Now, due to security requirements, I'd like to store the sensitive environment variables encrypted and have a script decrypt them.
my yaml file would look like this:
env:
  - name: POSTGRES_HOSTNAME
    value: localhost
  - name: ENCRYPTED_POSTGRES_USERNAME
    value: hgkdhrkhgrk
  - name: ENCRYPTED_POSTGRES_PASSWORD
    value: fkjeshfke
Then I have a script that runs over all environment variables with the ENCRYPTED_ prefix, decrypts them, and inserts the decrypted value under the environment variable name without the ENCRYPTED_ prefix.
Is there a way to do that?
The environment variables should be injected before the image starts running.
Another requirement is that the pod running the image must decrypt the variables, since it is the only one with permission to do so (working with Workload Identity).
something like:
- command:
    - sh
    - /decrypt_and_inject_environments.sh
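One common way to meet these requirements is to make the image's entrypoint a small wrapper script that decrypts the prefixed variables and then execs the original command, so the plain values only ever exist inside the running container. A minimal sketch, where decrypt is a placeholder for whatever KMS/Workload Identity call actually performs the decryption:
#!/bin/sh
# decrypt_and_inject_environments.sh -- sketch only; `decrypt` is a placeholder
set -e
# walk every variable carrying the ENCRYPTED_ prefix
for pair in $(env | grep '^ENCRYPTED_'); do
  name=${pair%%=*}        # e.g. ENCRYPTED_POSTGRES_USERNAME
  value=${pair#*=}        # the encrypted payload
  # export the decrypted value under the name without the prefix
  export "${name#ENCRYPTED_}"="$(decrypt "$value")"
done
# hand control to the original container command so the app sees the variables
exec "$@"
Wired in via command: ["sh", "/decrypt_and_inject_environments.sh"] with the app's original start command passed as args, the variables are decrypted inside the pod and injected before the image's own process starts, which matches both requirements above.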

Chaincode instantiation fails in AWS EKS network

I have a simple one-orderer, one-peer network deployed on AWS Elastic Kubernetes Service (EKS). I created the network using the official EKS documentation. I am able to bring up the orderer as well as the peer within the pods. The peer is able to create and join the channel, and the anchor peer update transaction is successful as well.
High-level configuration for the pods:
Started as Stateful Sets
Pods use dynamic PV with Storage Class configured for aws-ebs
Exposed using service type load balancer
Uses the Docker in Docker (DIND) approach for chaincode
Detailed Configuration for Orderer Pod:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: allparticipants-orderer
  labels:
    app: allparticipants-orderer
spec:
  serviceName: orderer
  replicas: 1
  selector:
    matchLabels:
      app: allparticipants-orderer
  template:
    metadata:
      labels:
        app: allparticipants-orderer
    spec:
      containers:
        - name: allparticipants-orderer
          image: <docker-hub-url>/orderer:0.1
          imagePullPolicy: Always
          command: ["sh", "-c", "orderer"]
          ports:
            - containerPort: 7050
          env:
            - name: FABRIC_LOGGING_SPEC
              value: DEBUG
            - name: ORDERER_GENERAL_LOGLEVEL
              value: DEBUG
            - name: ORDERER_GENERAL_LISTENADDRESS
              value: 0.0.0.0
            - name: ORDERER_GENERAL_GENESISMETHOD
              value: file
            - name: ORDERER_GENERAL_GENESISFILE
              value: /var/hyperledger/orderer/orderer.genesis.block
            - name: ORDERER_GENERAL_LOCALMSPID
              value: OrdererMSP
            - name: ORDERER_GENERAL_LOCALMSPDIR
              value: /var/hyperledger/orderer/msp
            - name: ORDERER_GENERAL_TLS_ENABLED
              value: "false"
            - name: ORDERER_GENERAL_TLS_PRIVATEKEY
              value: /var/hyperledger/orderer/tls/server.key
            - name: ORDERER_GENERAL_TLS_CERTIFICATE
              value: /var/hyperledger/orderer/tls/server.crt
            - name: ORDERER_GENERAL_TLS_ROOTCAS
              value: /var/hyperledger/orderer/tls/ca.crt
          volumeMounts:
            - name: allparticipants-orderer-ledger
              mountPath: /var/ledger
  volumeClaimTemplates:
    - metadata:
        name: allparticipants-orderer-ledger
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: allparticipants-orderer-sc
        resources:
          requests:
            storage: 1Gi
Detailed configuration for Peer along with DIND in the same pod:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: allparticipants-peer0
  labels:
    app: allparticipants-peer0
spec:
  serviceName: allparticipants-peer0
  replicas: 1
  selector:
    matchLabels:
      app: allparticipants-peer0
  template:
    metadata:
      labels:
        app: allparticipants-peer0
    spec:
      containers:
        - name: docker
          env:
            - name: DOCKER_TLS_CERTDIR
              value: ""
          securityContext:
            privileged: true
          image: "docker:stable-dind"
          ports:
            - containerPort: 2375
          volumeMounts:
            - mountPath: /var/lib/docker
              name: dockervolume
        - name: allparticipants-peer0
          image: <docker-hub-url>/peer0:0.1
          imagePullPolicy: Always
          command: ["sh", "-c", "peer node start"]
          ports:
            - containerPort: 7051
          env:
            - name: CORE_VM_ENDPOINT
              value: http://localhost:2375
            - name: CORE_PEER_CHAINCODELISTENADDRESS
              value: 0.0.0.0:7052
            - name: FABRIC_LOGGING_SPEC
              value: debug
            - name: CORE_LOGGING_PEER
              value: debug
            - name: CORE_LOGGING_CAUTHDSL
              value: debug
            - name: CORE_LOGGING_GOSSIP
              value: debug
            - name: CORE_LOGGING_LEDGER
              value: debug
            - name: CORE_LOGGING_MSP
              value: info
            - name: CORE_LOGGING_POLICIES
              value: debug
            - name: CORE_LOGGING_GRPC
              value: debug
            - name: CORE_LEDGER_STATE_STATEDATABASE
              value: goleveldb
            - name: GODEBUG
              value: "netdns=go"
            - name: CORE_PEER_TLS_ENABLED
              value: "false"
            - name: CORE_PEER_GOSSIP_USELEADERELECTION
              value: "true"
            - name: CORE_PEER_GOSSIP_ORGLEADER
              value: "false"
            - name: CORE_PEER_GOSSIP_SKIPHANDSHAKE
              value: "true"
            - name: CORE_PEER_PROFILE_ENABLED
              value: "true"
            - name: CORE_PEER_COMMITTER_ENABLED
              value: "true"
            - name: CORE_PEER_TLS_CERT_FILE
              value: /etc/hyperledger/fabric/tls/server.crt
            - name: CORE_PEER_TLS_KEY_FILE
              value: /etc/hyperledger/fabric/tls/server.key
            - name: CORE_PEER_TLS_ROOTCERT_FILE
              value: /etc/hyperledger/fabric/tls/ca.crt
            - name: CORE_PEER_ID
              value: allparticipants-peer0
            - name: CORE_PEER_ADDRESS
              value: <peer0-load-balancer-url>:7051
            - name: CORE_PEER_LISTENADDRESS
              value: 0.0.0.0:7051
            - name: CORE_PEER_EVENTS_ADDRESS
              value: 0.0.0.0:7053
            - name: CORE_PEER_GOSSIP_BOOTSTRAP
              value: <peer0-load-balancer-url>:7051
            - name: CORE_PEER_GOSSIP_EXTERNALENDPOINT
              value: <peer0-load-balancer-url>:7051
            - name: CORE_PEER_LOCALMSPID
              value: AllParticipantsMSP
            - name: CORE_PEER_MSPCONFIGPATH
              value: <path-to-msp-of-peer0>
            - name: ORDERER_ADDRESS
              value: <orderer-load-balancer-url>:7050
            - name: CORE_PEER_ADDRESSAUTODETECT
              value: "true"
            - name: CORE_VM_DOCKER_ATTACHSTDOUT
              value: "true"
            - name: FABRIC_CFG_PATH
              value: /etc/hyperledger/fabric
          volumeMounts:
            - name: allparticipants-peer0-ledger
              mountPath: /var/ledger
            - name: dockersock
              mountPath: /host/var/run/docker.sock
      volumes:
        - name: dockersock
          hostPath:
            path: /var/run/docker.sock
        - name: dockervolume
          persistentVolumeClaim:
            claimName: docker-pvc
  volumeClaimTemplates:
    - metadata:
        name: allparticipants-peer0-ledger
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: allparticipants-peer0-sc
        resources:
          requests:
            storage: 1Gi
Not including Storage Class and Service configuration for the pods as they seem to work fine.
So, as stated earlier, the peer is able to create and join the channel, and I am able to install the chaincode on the peer. However, while instantiating the chaincode from within the peer, I get the following error:
Error: error endorsing chaincode: rpc error: code = Unavailable desc = transport is closing
Apart from this, the Docker (DIND) container started within peer0's pod has the following warning logs:
level=warning msg="887409d917bd7ef74ebe8617b8dcbedc8197741662a14b988491c54085e9acfd cleanup: failed to unmount IPC: umount /var/lib/docker/containers/887409d917bd7ef74ebe8617b8dcbedc8197741662a14b988491c54085e9acfd/mounts/shm, flags: 0x2: no such file or directory"
Additionally, there are no logs in this docker container when I submit the following instantiation request:
peer chaincode instantiate -o $ORDERER_ADDRESS -C carchannel -n fabcar \
  -l node -v 0.1.1 -c '{"Args":[]}' -P "OR ('AllParticipantsMSP.member','AllParticipantsMSP.peer', 'AllParticipantsMSP.admin', 'AllParticipantsMSP.client')"
I tried searching for similar issues but was not able to find one that matches this EKS setup.
Is there an issue with the pod configuration or the EKS configuration? I am not able to get past this one. Can someone please point me in the right direction? I am quite new to K8s.
Update 1:
I updated the service type to LoadBalancer, keeping the rest of the configuration the same. I still get the same error.
Update 2:
Configured the DIND approach for the chaincode container.
Update 3:
Mounted a PV and PVC for the DIND container, and updated the CORE_VM_ENDPOINT access protocol from tcp to http.
The issue here was the image used for the Docker in Docker container. I downgraded the docker-in-docker image from docker:stable-dind to docker:18-dind. The stable image version has TLS enabled by default. In my case, I tried setting the environment variable DOCKER_TLS_CERTDIR to blank, but that did not work out.
Attaching the snippet of the DIND configuration:
containers:
  - name: docker
    securityContext:
      privileged: true
    image: "docker:18-dind"
    ports:
      - containerPort: 2375
        protocol: TCP
    volumeMounts:
      - mountPath: /var/lib/docker
        name: dockervolume
Note: While I have been able to work this out, I am not marking this as the accepted answer, since TLS might be required to instantiate the chaincode, there should be a way to use it, and I would be open to and looking out for those answers.
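One quick way to see whether a dind daemon came up with or without TLS is to check which endpoint it reports listening on in its startup logs; a sketch, using the pod name the StatefulSet above would produce:
kubectl logs allparticipants-peer0-0 -c docker | grep -i listen
If the daemon is serving plain TCP, the 2375 endpoint that CORE_VM_ENDPOINT points at should show up there.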

How to convert docker-compose to kubernetes?

I have an example project with nginx, php-fpm, mysql:
https://gitlab.com/x-team-blog/docker-compose-php
This is the docker-compose file:
version: '3'
services:
  php-fpm:
    build:
      context: ./php-fpm
    volumes:
      - ../src:/var/www
  nginx:
    build:
      context: ./nginx
    volumes:
      - ../src:/var/www
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf
      - ./nginx/sites/:/etc/nginx/sites-available
      - ./nginx/conf.d/:/etc/nginx/conf.d
    depends_on:
      - php-fpm
    ports:
      - "80:80"
      - "443:443"
  database:
    build:
      context: ./database
    environment:
      - MYSQL_DATABASE=mydb
      - MYSQL_USER=myuser
      - MYSQL_PASSWORD=secret
      - MYSQL_ROOT_PASSWORD=docker
    volumes:
      - ./database/data.sql:/docker-entrypoint-initdb.d/data.sql
I'd like to use Kubernetes in Google Cloud and for this I have created several files:
PersistentVolumeClaim database-claim0-persistentvolumeclaim.yaml:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: database-claim0
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 400Mi
Also I created secrets:
kubectl create secret generic mysql \
  --from-literal=MYSQL_DATABASE=mydb \
  --from-literal=MYSQL_PASSWORD=secret \
  --from-literal=MYSQL_ROOT_PASSWORD=docker \
  --from-literal=MYSQL_USER=myuser
Then I created a service:
apiVersion: v1
kind: Service
metadata:
  name: database
  labels:
    app: database
spec:
  type: ClusterIP
  ports:
    - port: 3306
  selector:
    app: database
And now I'd like to create database-deployment.yaml, but I don't know what I have to write in the volumeMounts section to copy the SQL file the way docker-compose does with:
- ./database/data.sql:/docker-entrypoint-initdb.d/data.sql
database-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: database
  labels:
    app: database
spec:
  replicas: 1
  selector:
    matchLabels:
      app: database
  template:
    metadata:
      labels:
        app: database
    spec:
      containers:
        - image: mariadb
          name: database
          env:
            - name: MYSQL_DATABASE
              valueFrom:
                secretKeyRef:
                  name: mysql
                  key: MYSQL_DATABASE
            - name: MYSQL_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql
                  key: MYSQL_PASSWORD
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql
                  key: MYSQL_ROOT_PASSWORD
            - name: MYSQL_USER
              valueFrom:
                secretKeyRef:
                  name: mysql
                  key: MYSQL_USER
          ports:
            - containerPort: 3306
              name: database
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: database-claim0
UPD:
My mistake. I did not push images first.
Use kompose
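For reference, the conversion itself is a single command run from the directory containing the compose file (assuming kompose is installed; by default it picks up docker-compose.yml):
kompose convert
The INFO lines below are the files it generated: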
INFO Kubernetes file "nginx-service.yaml" created
INFO Kubernetes file "database-deployment.yaml" created
INFO Kubernetes file "database-claim0-persistentvolumeclaim.yaml" created
INFO Kubernetes file "nginx-deployment.yaml" created
INFO Kubernetes file "nginx-claim0-persistentvolumeclaim.yaml" created
INFO Kubernetes file "nginx-claim1-persistentvolumeclaim.yaml" created
INFO Kubernetes file "nginx-claim2-persistentvolumeclaim.yaml" created
INFO Kubernetes file "nginx-claim3-persistentvolumeclaim.yaml" created
INFO Kubernetes file "php-fpm-deployment.yaml" created
INFO Kubernetes file "php-fpm-claim0-persistentvolumeclaim.yaml" created
There is also an online tool that helps convert most of a Docker Compose file into Kubernetes resources.
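As for the data.sql part of the question, which neither answer covers directly: a common pattern (a sketch, not taken from the answers above) is to put the init script into a ConfigMap and mount it at /docker-entrypoint-initdb.d, which the MariaDB entrypoint runs on first start:
kubectl create configmap database-init --from-file=data.sql=./database/data.sql
The Deployment would then declare a configMap volume referencing database-init and mount it at /docker-entrypoint-initdb.d alongside the existing mysql-persistent-storage mount; the name database-init is only illustrative.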

How to set GOOGLE_APPLICATION_CREDENTIALS on GKE running through Kubernetes

With the help of Kubernetes I am running daily jobs on GKE: based on a cron schedule configured in Kubernetes, a new container spins up every day and tries to insert some data into BigQuery.
The setup we have is 2 different projects in GCP: in one project we maintain the data in BigQuery, and in the other project we run all of GKE. So when GKE has to interact with the other project's resources, my guess is that I have to set an environment variable named GOOGLE_APPLICATION_CREDENTIALS pointing to a service account JSON file, but since Kubernetes spins up a new container every day, I am not sure how and where I should set this variable.
Thanks in Advance!
NOTE: this file is parsed as a golang template by the drone-gke plugin.
---
apiVersion: v1
kind: Secret
metadata:
  name: my-data-service-account-credentials
type: Opaque
data:
  sa_json: "bas64JsonServiceAccount"
---
apiVersion: v1
kind: Pod
metadata:
  name: adtech-ads-apidata-el-adunit-pod
spec:
  containers:
    - name: adtech-ads-apidata-el-adunit-container
      volumeMounts:
        - name: service-account-credentials-volume
          mountPath: "/etc/gcp"
          readOnly: true
  volumes:
    - name: service-account-credentials-volume
      secret:
        secretName: my-data-service-account-credentials
        items:
          - key: sa_json
            path: sa_credentials.json
This is our cron job for loading the AdUnit data:
apiVersion: batch/v2alpha1
kind: CronJob
metadata:
  name: adtech-ads-apidata-el-adunit
spec:
  schedule: "*/5 * * * *"
  suspend: false
  concurrencyPolicy: Replace
  successfulJobsHistoryLimit: 10
  failedJobsHistoryLimit: 10
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: adtech-ads-apidata-el-adunit-container
              image: {{.image}}
              args:
                - -cp
                - opt/nyt/DFPDataIngestion-1.0-jar-with-dependencies.jar
                - com.nyt.cron.AdUnitJob
              env:
                - name: ENV_APP_NAME
                  value: "{{.env_app_name}}"
                - name: ENV_APP_CONTEXT_NAME
                  value: "{{.env_app_context_name}}"
                - name: ENV_GOOGLE_PROJECTID
                  value: "{{.env_google_projectId}}"
                - name: ENV_GOOGLE_DATASETID
                  value: "{{.env_google_datasetId}}"
                - name: ENV_REPORTING_DATASETID
                  value: "{{.env_reporting_datasetId}}"
                - name: ENV_ADBRIDGE_DATASETID
                  value: "{{.env_adbridge_datasetId}}"
                - name: ENV_SALESFORCE_DATASETID
                  value: "{{.env_salesforce_datasetId}}"
                - name: ENV_CLOUD_PLATFORM_URL
                  value: "{{.env_cloud_platform_url}}"
                - name: ENV_SMTP_HOST
                  value: "{{.env_smtp_host}}"
                - name: ENV_TO_EMAIL
                  value: "{{.env_to_email}}"
                - name: ENV_FROM_EMAIL
                  value: "{{.env_from_email}}"
                - name: ENV_AWS_USERNAME
                  value: "{{.env_aws_username}}"
                - name: ENV_CLIENT_ID
                  value: "{{.env_client_id}}"
                - name: ENV_REFRESH_TOKEN
                  value: "{{.env_refresh_token}}"
                - name: ENV_NETWORK_CODE
                  value: "{{.env_network_code}}"
                - name: ENV_APPLICATION_NAME
                  value: "{{.env_application_name}}"
                - name: ENV_SALESFORCE_USERNAME
                  value: "{{.env_salesforce_username}}"
                - name: ENV_SALESFORCE_URL
                  value: "{{.env_salesforce_url}}"
                - name: GOOGLE_APPLICATION_CREDENTIALS
                  value: "/etc/gcp/sa_credentials.json"
                - name: ENV_CLOUD_SQL_URL
                  valueFrom:
                    secretKeyRef:
                      name: secrets
                      key: cloud_sql_url
                - name: ENV_AWS_PASSWORD
                  valueFrom:
                    secretKeyRef:
                      name: secrets
                      key: aws_password
                - name: ENV_CLIENT_SECRET
                  valueFrom:
                    secretKeyRef:
                      name: secrets
                      key: dfp_client_secret
                - name: ENV_SALESFORCE_PASSWORD
                  valueFrom:
                    secretKeyRef:
                      name: secrets
                      key: salesforce_password
          restartPolicy: OnFailure
So, if your GKE project is project my-gke, and the project containing the services/things your GKE containers need access to is project my-data, one approach is to:
Create a service account in the my-data project. Give it whatever GCP roles/permissions are needed (ex. roles/bigquery.dataViewer if you have some BigQuery tables that your my-gke GKE containers need to read).
Create a service account key for that service account. When you do this in the console following https://cloud.google.com/iam/docs/creating-managing-service-account-keys, you should automatically download a .json file containing the SA credentials.
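The key can also be created from the command line instead of the console; a sketch, assuming gcloud is authenticated against the my-data project and [SA_EMAIL] is the service account's email address:
gcloud iam service-accounts keys create the-downloaded-SA-credentials.json \
  --iam-account=[SA_EMAIL]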
Create a Kubernetes secret resource for those service account credentials. It might look something like this:
apiVersion: v1
kind: Secret
metadata:
  name: my-data-service-account-credentials
type: Opaque
data:
  sa_json: <contents of running 'base64 the-downloaded-SA-credentials.json'>
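Equivalently, the same secret can be created straight from the downloaded key file, letting kubectl do the base64 encoding (the key name sa_json matches the one used below):
kubectl create secret generic my-data-service-account-credentials \
  --from-file=sa_json=the-downloaded-SA-credentials.json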
Mount the credentials in the container that needs access:
[...]
spec:
  containers:
    - name: my-container
      volumeMounts:
        - name: service-account-credentials-volume
          mountPath: /etc/gcp
          readOnly: true
  [...]
  volumes:
    - name: service-account-credentials-volume
      secret:
        secretName: my-data-service-account-credentials
        items:
          - key: sa_json
            path: sa_credentials.json
Set the GOOGLE_APPLICATION_CREDENTIALS environment variable in the container to point to the path of the mounted credentials:
[...]
spec:
  containers:
    - name: my-container
      env:
        - name: GOOGLE_APPLICATION_CREDENTIALS
          value: /etc/gcp/sa_credentials.json
With that, any official GCP clients (ex. the GCP Python client, GCP Java client, the gcloud CLI, etc.) should respect the GOOGLE_APPLICATION_CREDENTIALS env var and, when making API requests, automatically use the credentials of the my-data service account whose credentials .json file you created and mounted.
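To sanity-check a running pod, it can help to confirm that both the variable and the mounted file are in place; a sketch, substituting your pod name and assuming the image ships a shell:
kubectl exec my-pod -- sh -c 'echo $GOOGLE_APPLICATION_CREDENTIALS && ls /etc/gcp'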