Google Cloud, Kubernetes and Cloud SQL proxy: default Compute Engine service account issue - kubernetes

I have Google Cloud projects A, B, C and D. They all use a similar setup for the Kubernetes cluster and deployment. Projects A, B and C were built months ago. They all use the Google Cloud SQL proxy to connect to the Google Cloud SQL service. When I recently started setting up Kubernetes for project D, I got the following error in the Stackdriver logging:
the default Compute Engine service account is not configured with sufficient permissions to access the Cloud SQL API from this VM. Please create a new VM with Cloud SQL access (scope) enabled under "Identity and API access". Alternatively, create a new "service account key" and specify it using the -credential_file parameter
I have compared the Kubernetes clusters of A, B, C and D, but they appear to be the same.
Here is the deployment I am using:
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: my-site
  labels:
    system: projectA
spec:
  selector:
    matchLabels:
      system: projectA
  template:
    metadata:
      labels:
        system: projectA
    spec:
      containers:
      - name: web
        image: gcr.io/customerA/projectA:alpha1
        ports:
        - containerPort: 80
        env:
        - name: DB_HOST
          value: 127.0.0.1:3306
        # These secrets are required to start the pod.
        # [START cloudsql_secrets]
        - name: DB_USER
          valueFrom:
            secretKeyRef:
              name: cloudsql-db-credentials
              key: username
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: cloudsql-db-credentials
              key: password
        # [END cloudsql_secrets]
      # Change <INSTANCE_CONNECTION_NAME> here to include your GCP
      # project, the region of your Cloud SQL instance and the name
      # of your Cloud SQL instance. The format is
      # $PROJECT:$REGION:$INSTANCE
      # [START proxy_container]
      - name: cloudsql-proxy
        image: gcr.io/cloudsql-docker/gce-proxy:1.11
        command:
        - sh
        - -c
        - /cloud_sql_proxy -instances=my-gcloud-project:europe-west1:databaseName=tcp:3306
        - -credential_file=/secrets/cloudsql/credentials.json
        # [START cloudsql_security_context]
        securityContext:
          runAsUser: 2  # non-root user
          allowPrivilegeEscalation: false
        # [END cloudsql_security_context]
        volumeMounts:
        - name: cloudsql-instance-credentials
          mountPath: /secrets/cloudsql
          readOnly: true
      # [END proxy_container]
      # [START volumes]
      volumes:
      - name: cloudsql-instance-credentials
        secret:
          secretName: cloudsql-instance-credentials
      # [END volumes]
So it would appear that the default service account doesn't have enough permissions? Google Cloud doesn't allow enabling the Cloud SQL API scope when creating the cluster via the Google Cloud console.
From what I have googled, some say the problem was with the gcr.io/cloudsql-docker/gce-proxy image, but I have tried newer versions and the same error still occurs.

I found a solution to this problem: setting the --service-account argument when creating the cluster. Note that I haven't tested what the minimum required permissions for the new service account are.
Here are the steps:
Create a new service account; no API key is required. The name I used was "super-service".
Assign the roles Cloud SQL Admin, Compute Admin, Kubernetes Engine Admin and Editor to the new service account (a scripted sketch of these two steps follows the cluster-creation command below).
Use gcloud to create the cluster with the new service account, like this:
gcloud container clusters create my-cluster \
--zone=europe-west1-c \
--labels=system=projectA \
--num-nodes=3 \
--enable-master-authorized-networks \
--enable-network-policy \
--enable-ip-alias \
--service-account=super-service@project-D.iam.gserviceaccount.com \
--master-authorized-networks <list-of-my-ips>
After that, the cluster and the deployment were deployed without errors.
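For reference, the service account creation and role assignment can also be scripted; a sketch, assuming the project ID is project-D (the role IDs are my mapping of the role titles listed above):
gcloud iam service-accounts create super-service \
  --display-name="super-service" \
  --project=project-D

for role in roles/cloudsql.admin roles/compute.admin roles/container.admin roles/editor; do
  gcloud projects add-iam-policy-binding project-D \
    --member="serviceAccount:super-service@project-D.iam.gserviceaccount.com" \
    --role="$role"
done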

Related

DataDog how to disable Redis integration

I've installed the DataDog agent on my Kubernetes cluster using the Helm chart (https://github.com/helm/charts/tree/master/stable/datadog).
This works very well except for one thing. I have a number of Redis containers that have passwords set. This seems to be causing issues for the DataDog agent because it can't connect to Redis without a password.
I would like to either disable monitoring Redis completely or somehow bypass the Redis authentication. If I leave it as is I get a lot of error messages in the DataDog container logs and the redisdb integration shows up in yellow in the DataDog dashboard.
What are my options here?
I am not a fan of Helm, but you can accomplish this in two ways:
via env vars: make use of the DD_AC_EXCLUDE variable to exclude the Redis containers, e.g. DD_AC_EXCLUDE=name:prefix-redis (see the snippet below)
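In the agent's DaemonSet container spec this could look like the following (a sketch; the prefix-redis name is only an example):
env:
  - name: DD_AC_EXCLUDE
    value: "name:prefix-redis"   # skip autodiscovery for containers matching this name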
via a config map: mount an empty config map over /etc/datadog-agent/conf.d/redisdb.d/; below is an example where I renamed auto_conf.yaml to auto_conf.yaml.example.
apiVersion: v1
data:
  auto_conf.yaml.example: |
    ad_identifiers:
      - redis
    init_config:
    instances:
      ## @param host - string - required
      ## Enter the host to connect to.
      #
      - host: "%%host%%"
        ## @param port - integer - required
        ## Enter the port of the host to connect to.
        #
        port: "6379"
  conf.yaml.example: |
    init_config:
    instances:
      ## @param host - string - required
      ## Enter the host to connect to.
      # [removed content]
kind: ConfigMap
metadata:
  creationTimestamp: null
  name: redisdb-d
Then alter the DaemonSet/Deployment object:
[....]
    volumeMounts:
      - name: redisdb-d
        mountPath: /etc/datadog-agent/conf.d/redisdb.d
[...]
  volumes:
    - name: redisdb-d
      configMap:
        name: redisdb-d
[...]

kubernetes fails to pull a private image [Google Cloud Container Registry, Digital Ocean]

I'm trying to set up GCR with Kubernetes and I'm getting Error: ErrImagePull
Failed to pull image "eu.gcr.io/xxx/nodejs": rpc error: code = Unknown desc = Error response from daemon: pull access denied for eu.gcr.io/xxx/nodejs, repository does not exist or may require 'docker login'
Although I have set up the secret correctly in the service account and added imagePullSecrets in the deployment spec:
deployment.yml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.version: 1.18.0 (06a2e56)
  creationTimestamp: null
  labels:
    io.kompose.service: nodejs
  name: nodejs
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        io.kompose.service: nodejs
    spec:
      containers:
      - env:
        - name: MONGO_DB
          valueFrom:
            configMapKeyRef:
              key: MONGO_DB
              name: nodejs-env
        - name: MONGO_HOSTNAME
          value: db
        - name: MONGO_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mongo-secret
              key: MONGO_PASSWORD
        - name: MONGO_PORT
          valueFrom:
            configMapKeyRef:
              key: MONGO_PORT
              name: nodejs-env
        - name: MONGO_USERNAME
          valueFrom:
            secretKeyRef:
              name: mongo-secret
              key: MONGO_USERNAME
        image: "eu.gcr.io/xxx/nodejs"
        name: nodejs
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
        resources: {}
      imagePullSecrets:
      - name: gcr-json-key
      initContainers:
      - name: init-db
        image: busybox
        command: ['sh', '-c', 'until nc -z db:27017; do echo waiting for db; sleep 2; done;']
      restartPolicy: Always
status: {}
I used this to add the secret, and it reported that it was created:
kubectl create secret docker-registry gcr-json-key --docker-server=eu.gcr.io --docker-username=_json_key --docker-password="$(cat mycreds.json)" --docker-email=mygcpemail@gmail.com
How can I debug this? Any ideas are welcome!
It looks like the issue is caused by a lack of permissions on the related service account XXXXXXXXXXX-compute@XXXXXX.gserviceaccount.com, which is missing the Editor role.
Also, to restrict the scope so that the account can only push and pull images from Google Kubernetes Engine, it needs the Storage Admin / Storage Object Viewer permissions, which can be assigned by following the instructions in this article [1].
Additionally, to set the read-write storage scope when creating a Google Kubernetes Engine cluster, use the --scopes option with the "storage-rw" scope [2].
[1] https://cloud.google.com/container-registry/docs/access-control
[2] https://cloud.google.com/container-registry/docs/using-with-google-cloud-platform#google-kubernetes-engine
If the VM instance for pushing or pulling images and the Container Registry storage bucket are in the same Google Cloud Platform project, the Compute Engine default service account is configured with appropriate permissions to push or pull images.
If the VM instance is in a different project or if the instance uses a different service account, you must configure access to the storage bucket used by the repository.
By default, a Compute Engine VM has the read-only access scope configured for storage buckets. To push private Docker images, your instance must have read-write storage access scope configured as described in Access scopes.
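For example, a cluster created with the read-write storage scope might look like this (a sketch; the cluster name and zone are placeholders):
gcloud container clusters create my-cluster \
  --zone=europe-west1-c \
  --scopes=storage-rw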
Please see [1] for further reference, and the following table from [2]:
Action             Permission             Role                         Role Title
Pull (Read Only)   storage.objects.get    roles/storage.objectViewer   Storage Object Viewer
                   storage.objects.list
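For example, granting pull access on the registry's backing bucket could be done with gsutil (a sketch; the eu.gcr.io bucket name follows the eu.artifacts.PROJECT-ID.appspot.com pattern, and the service account is the one from the error above):
gsutil iam ch \
  serviceAccount:XXXXXXXXXXX-compute@XXXXXX.gserviceaccount.com:roles/storage.objectViewer \
  gs://eu.artifacts.xxx.appspot.com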
Also, please share the error code if you are having trouble with any of the steps.

How to pass environmental variables in envconsul config file?

I read this in the envconsul documentation:
For additional security, tokens may also be read from the environment
using the CONSUL_TOKEN or VAULT_TOKEN environment variables
respectively. It is highly recommended that you do not put your tokens
in plain-text in a configuration file.
So, I have this envconsul.hcl file:
# the settings to connect to the vault server
# "http://10.0.2.2:8200" is the Vault's address on the host machine when using Minikube
vault {
  address = "${env(VAULT_ADDR)}"
  renew_token = false
  retry {
    backoff = "1s"
  }
  token = "${env(VAULT_TOKEN)}"
}

# the settings to find the endpoint of the secrets engine
secret {
  no_prefix = true
  path = "secret/app/config"
}
However, I get this error:
[WARN] (view) vault.read(secret/app/config): vault.read(secret/app/config): Get $%7Benv%28VAULT_ADDR%29%7D/v1/secret/app/config: unsupported protocol scheme "" (retry attempt 1 after "1s")
As I understand it, envconsul cannot do the variable substitution.
When I set the address to "http://10.0.2.2:8200" directly, it works. The same happens with the VAULT_TOKEN variable.
If I hardcode the VAULT_ADDR, then I get this error instead:
[WARN] (view) vault.read(secret/app/config): vault.read(secret/app/config): Error making API request.
URL: GET http://10.0.2.2:8200/v1/secret/app/config
Code: 403. Errors:
* permission denied (retry attempt 2 after "2s")
Is there a way for this file to understand the environmental variables?
EDIT 1
This is my pod.yml file
---
apiVersion: v1
kind: Pod
metadata:
  name: sample
spec:
  serviceAccountName: vault-auth
  restartPolicy: Never
  # Add the ConfigMap as a volume to the Pod
  volumes:
    - name: vault-token
      emptyDir:
        medium: Memory
    # Populate the volume with config map data
    - name: config
      configMap:
        # `name` here must match the name
        # specified in the ConfigMap's YAML
        # -> kubectl create configmap vault-cm --from-file=./vault-configs/
        name: vault-cm
        items:
          - key: vault-agent-config.hcl
            path: vault-agent-config.hcl
          - key: envconsul.hcl
            path: envconsul.hcl
  initContainers:
    # Vault container
    - name: vault-agent-auth
      image: vault
      volumeMounts:
        - name: vault-token
          mountPath: /home/vault
        - name: config
          mountPath: /etc/vault
      # This assumes Vault running on local host and K8s running in Minikube using VirtualBox
      env:
        - name: VAULT_ADDR
          value: http://10.0.2.2:8200
      # Run the Vault agent
      args:
        [
          "agent",
          "-config=/etc/vault/vault-agent-config.hcl",
          "-log-level=debug",
        ]
  containers:
    - name: python
      image: myappimg
      imagePullPolicy: Never
      ports:
        - containerPort: 5000
      volumeMounts:
        - name: vault-token
          mountPath: /home/vault
        - name: config
          mountPath: /etc/envconsul
      env:
        - name: HOME
          value: /home/vault
        - name: VAULT_ADDR
          value: http://10.0.2.2:8200
I. Within the container specification, set the environment variables (values in double quotes):
env:
  - name: VAULT_TOKEN
    value: "abcd1234"
  - name: VAULT_ADDR
    value: "http://10.0.2.2:8200"
Then refer to the values in envconsul.hcl:
vault {
  address = ${VAULT_ADDR}
  renew_token = false
  retry {
    backoff = "1s"
  }
  token = ${VAULT_TOKEN}
}
II. Another option is to unseal the vault cluster (with the unseal key which was printed while initializing the vault cluster)
$ vault operator unseal
and then authenticate to the vault cluster using a root token.
$ vault login <your-generated-root-token>
I tried many suggestions and nothing worked until I passed the -vault-token argument to the envconsul command, like this:
envconsul -vault-token=$VAULT_TOKEN -config=/app/config.hcl -secret="/secret/debug/service" env
and in config.hcl it should be like this:
vault {
  address = "http://kvstorage.try.direct:8200"
  token = "${env(VAULT_TOKEN)}"
}
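Note that for both the -vault-token flag and the env() call to resolve, the token has to be present in the container's environment; e.g. (a sketch):
export VAULT_TOKEN="s.XXXXXXXX"
envconsul -vault-token=$VAULT_TOKEN -config=/app/config.hcl -secret="/secret/debug/service" env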

error: the server doesn't have resource type "svc"

Admins-MacBook-Pro:~ Harshin$ kubectl cluster-info
Kubernetes master is running at http://localhost:8080
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
error: the server doesn't have a resource type "services"
I am following this document:
https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html?refid=gs_card
The error occurs while I am testing my configuration in step 11 of "Configure kubectl for Amazon EKS":
apiVersion: v1
clusters:
- cluster:
    server: ...
    certificate-authority-data: ....
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: aws
  name: aws
current-context: aws
kind: Config
preferences: {}
users:
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: heptio-authenticator-aws
      args:
        - "token"
        - "-i"
        - "kunjeti"
        # - "-r"
        # - "<role-arn>"
      # env:
      #   - name: AWS_PROFILE
      #     value: "<aws-profile>"
Change "name: kubernetes" to actual name of your cluster.
Here is what I did to work it through:
1. Enabled verbose output to make sure the config files are read properly:
kubectl get svc --v=10
2. Modified the file as below:
apiVersion: v1
clusters:
- cluster:
    server: XXXXX
    certificate-authority-data: XXXXX
  name: abc-eks
contexts:
- context:
    cluster: abc-eks
    user: aws
  name: aws
current-context: aws
kind: Config
preferences: {}
users:
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator
      args:
        - "token"
        - "-i"
        - "abc-eks"
        # - "-r"
        # - "<role-arn>"
      env:
        - name: AWS_PROFILE
          value: "aws"
I have faced a similar issue; this is not a direct solution but a workaround: use AWS CLI commands to create the cluster rather than the console. As per the documentation, the user or role which creates the cluster will have master access.
aws eks create-cluster --name <cluster name> --role-arn <EKS Service Role> --resources-vpc-config subnetIds=<subnet ids>,securityGroupIds=<security group id>
Make sure that the EKS Service Role has IAM access (I granted Full, however AssumeRole should do, I guess).
The EC2 machine's role should have eks:CreateCluster and IAM access. Worked for me :)
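A sketch of a minimal IAM policy for the calling role (the exact set of actions needed beyond eks:CreateCluster and iam:PassRole is an assumption):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "eks:CreateCluster",
        "iam:PassRole"
      ],
      "Resource": "*"
    }
  ]
}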
I had this issue and found it was caused by the default key setting in ~/.aws/credentials.
We have a few AWS accounts for different customers plus a sandbox account for our own testing and research. So our credentials file looks something like this:
[default]
aws_access_key_id = abc
aws_secret_access_key = xyz
region=us-east-1
[cpproto]
aws_access_key_id = abc
aws_secret_access_key = xyz
region=us-east-1
[sandbox]
aws_access_key_id = abc
aws_secret_access_key = xyz
region=us-east-1
I was messing around in our sandbox account, but the [default] section was pointing to another account.
Once I put the keys for sandbox into the default section, the "kubectl get svc" command worked fine.
It seems we need a way to tell aws-iam-authenticator which keys to use, the same as --profile in the AWS CLI.
I guess you should uncomment the "env" item and point it at the right profile in ~/.aws/credentials, because aws-iam-authenticator requires the exact AWS credentials.
Refer to this document: https://docs.aws.amazon.com/eks/latest/userguide/create-kubeconfig.html
To have the AWS IAM Authenticator for Kubernetes always use a specific named AWS credential profile (instead of the default AWS credential provider chain), uncomment the env lines and substitute with the profile name to use.
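For example, to pin the authenticator to the [sandbox] profile from the credentials file above, the users section of the kubeconfig could look like this (a sketch reusing the cluster name from earlier in this thread):
users:
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator
      args:
        - "token"
        - "-i"
        - "kunjeti"
      env:
        - name: AWS_PROFILE
          value: "sandbox"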

Google cloud: insufficient authentication scopes

I am having difficulties sending requests to my Spring Boot application deployed in my Google Cloud Kubernetes cluster. My application receives a photo and sends it to the Google Vision API. I am using the provided client library (https://cloud.google.com/vision/docs/libraries#client-libraries-install-java) as explained here https://cloud.google.com/vision/docs/auth:
If you're using a client library to call the Vision API, use Application Default Credentials (ADC). Services using ADC look for credentials within a GOOGLE_APPLICATION_CREDENTIALS environment variable. Unless you specifically wish to have ADC use other credentials (for example, user credentials), we recommend you set this environment variable to point to your service account key file.
On my local machine everything works fine: I have a Docker container with an env variable GOOGLE_APPLICATION_CREDENTIALS pointing to my service account key file.
I do not have this variable in my cluster. This is the response I am getting from my application in the Kubernetes cluster:
{
  "timestamp": "2018-05-10T14:07:27.652+0000",
  "status": 500,
  "error": "Internal Server Error",
  "message": "io.grpc.StatusRuntimeException: PERMISSION_DENIED: Request had insufficient authentication scopes.",
  "path": "/image"
}
What am I doing wrong? Thanks in advance!
I also had to specify the GOOGLE_APPLICATION_CREDENTIALS environment variable on my GKE setup; these are the steps I completed, thanks to How to set GOOGLE_APPLICATION_CREDENTIALS on GKE running through Kubernetes:
1. Create the secret (in my case in my deploy step on Gitlab):
kubectl create secret generic google-application-credentials --from-file=./application-credentials.json
2. Set up the volume:
...
volumes:
  - name: google-application-credentials-volume
    secret:
      secretName: google-application-credentials
      items:
        - key: application-credentials.json # default name created by the create secret from-file command
          path: application-credentials.json
3. Set up the volume mount:
spec:
  containers:
    - name: my-service
      volumeMounts:
        - name: google-application-credentials-volume
          mountPath: /etc/gcp
          readOnly: true
4. Set up the environment variable:
spec:
  containers:
    - name: my-service
      env:
        - name: GOOGLE_APPLICATION_CREDENTIALS
          value: /etc/gcp/application-credentials.json
That means you are trying to access a service that is not enabled or that you are not authenticated to use. Are you sure that you enabled access to the Google Vision API?
You can check/enable APIs from the dashboard at https://console.cloud.google.com/apis/dashboard, or navigate to APIs & Services from the menu.
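For example, the Vision API can also be enabled from the command line (assuming the gcloud CLI points at the right project):
gcloud services enable vision.googleapis.com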
Will it help if you add the GOOGLE_APPLICATION_CREDENTIALS environment variable to your deployment/pod/container configuration?
Here is an example of setting environment variables, as described in the Kubernetes documentation:
apiVersion: v1
kind: Pod
metadata:
  name: envar-demo
  labels:
    purpose: demonstrate-envars
spec:
  containers:
    - name: envar-demo-container
      image: gcr.io/google-samples/node-hello:1.0
      env:
        - name: DEMO_GREETING
          value: "Hello from the environment"
        - name: DEMO_FAREWELL
          value: "Such a sweet sorrow"