kubernetes fails to pull a private image [Google Cloud Container Registry, Digital Ocean] - kubernetes

I'm trying to set up GCR with Kubernetes
and getting Error: ErrImagePull
Failed to pull image "eu.gcr.io/xxx/nodejs": rpc error: code = Unknown desc = Error response from daemon: pull access denied for eu.gcr.io/xxx/nodejs, repository does not exist or may require 'docker login'
Although I have set up the secret correctly in the service account, and added imagePullSecrets in the deployment spec:
deployment.yml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.version: 1.18.0 (06a2e56)
  creationTimestamp: null
  labels:
    io.kompose.service: nodejs
  name: nodejs
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        io.kompose.service: nodejs
    spec:
      containers:
      - env:
        - name: MONGO_DB
          valueFrom:
            configMapKeyRef:
              key: MONGO_DB
              name: nodejs-env
        - name: MONGO_HOSTNAME
          value: db
        - name: MONGO_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mongo-secret
              key: MONGO_PASSWORD
        - name: MONGO_PORT
          valueFrom:
            configMapKeyRef:
              key: MONGO_PORT
              name: nodejs-env
        - name: MONGO_USERNAME
          valueFrom:
            secretKeyRef:
              name: mongo-secret
              key: MONGO_USERNAME
        image: "eu.gcr.io/xxx/nodejs"
        name: nodejs
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
        resources: {}
      imagePullSecrets:
      - name: gcr-json-key
      initContainers:
      - name: init-db
        image: busybox
        command: ['sh', '-c', 'until nc -z db:27017; do echo waiting for db; sleep 2; done;']
      restartPolicy: Always
status: {}
I used this command to add the secret, and it said it was created:
kubectl create secret docker-registry gcr-json-key --docker-server=eu.gcr.io --docker-username=_json_key --docker-password="$(cat mycreds.json)" --docker-email=mygcpemail@gmail.com
How can I debug this? Any ideas are welcome!

It looks like the issue is caused by a lack of permissions on the related service account
XXXXXXXXXXX-compute@XXXXXX.gserviceaccount.com, which is missing the Editor role.
Also, if we want to restrict the scope to only pushing and pulling images from Google Kubernetes Engine, this account needs the appropriate Storage role on the registry's bucket (Storage Object Viewer for pulls), which can be assigned by following the instructions in this article [1].
Additionally, to set the read-write storage scope when creating a Google Kubernetes Engine cluster, use the --scopes option and pass the "storage-rw" scope [2].
[1] https://cloud.google.com/container-registry/docs/access-control
[2] https://cloud.google.com/container-registry/docs/using-with-google-cloud-platform#google-kubernetes-engine
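As a rough, hedged sketch of both points (the project ID, service account email, cluster name and zone below are placeholders, not values taken from the question):

# Grant read access to the registry's backing storage to the node service account.
gcloud projects add-iam-policy-binding MY_PROJECT \
  --member=serviceAccount:XXXXXXXXXXX-compute@developer.gserviceaccount.com \
  --role=roles/storage.objectViewer

# Create the GKE cluster with the read-write storage scope mentioned in [2].
gcloud container clusters create my-cluster \
  --zone=europe-west1-b \
  --scopes=storage-rw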

If the VM instance for pushing or pulling images and the Container Registry storage bucket are in the same Google Cloud Platform project, the Compute Engine default service account is configured with appropriate permissions to push or pull images.
If the VM instance is in a different project or if the instance uses a different service account, you must configure access to the storage bucket used by the repository.
By default, a Compute Engine VM has the read-only access scope configured for storage buckets. To push private Docker images, your instance must have read-write storage access scope configured as described in Access scopes.
Please see the documentation linked above for further reference.
The relevant permission for pulling images is summarized in the table below:
Action              Permission                                   Role                         Role Title
Pull (Read Only)    storage.objects.get, storage.objects.list   roles/storage.objectViewer   Storage Object Viewer
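For example, a hedged sketch of granting that read-only role directly on the bucket backing eu.gcr.io (the service account email is a placeholder, and the bucket name is assumed to follow the eu.artifacts.PROJECT-ID.appspot.com convention):

# Allow the node service account to pull from eu.gcr.io (placeholder values).
gsutil iam ch \
  serviceAccount:XXXXXXXXXXX-compute@developer.gserviceaccount.com:roles/storage.objectViewer \
  gs://eu.artifacts.MY_PROJECT.appspot.com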
Also, please share the exact error message if you run into trouble at any of these steps.

How to deploy helm charts which are stored in AWS ECR using argoCD

I want to deploy helm charts, which are stored in a repository in AWS ECR, in the kubernetes cluster using ArgoCD. But I am getting a 401 unauthorized issue. I have pasted the entire issue below
Unable to create application: application spec is invalid: InvalidSpecError: Unable to get app details: rpc error: code = Unknown desc = `helm chart pull <aws account id>.dkr.ecr.<region>.amazonaws.com/testrepo:1.1.0` failed exit status 1: Error: unexpected status code [manifests 1.1.0]: 401 Unauthorized
Yes, you can use ECR for storing helm charts (https://docs.aws.amazon.com/AmazonECR/latest/userguide/push-oci-artifact.html)
I have managed to add the repo to ArgoCD, but the token expires so it is not a complete solution.
argocd repo add XXXXXXXXXX.dkr.ecr.us-east-1.amazonaws.com --type helm --name some-helmreponame --enable-oci --username AWS --password $(aws ecr get-login-password --region us-east-1)
Using the declarative repository definition (see https://argo-cd.readthedocs.io/en/stable/operator-manual/declarative-setup/#repositories, or just override .argo-cd.configs.repositories in the Helm chart) it is actually quite easy to create a cron-job that updates the ECR credentials:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: argocd-ecr-credentials
spec:
  schedule: '0 */6 * * *' # every 6 hours, since credentials expire every 12 hours
  jobTemplate:
    metadata:
      name: argocd-ecr-credentials
    spec:
      template:
        spec:
          serviceAccountName: argocd-server
          restartPolicy: OnFailure
          containers:
            - name: update-secret
              image: alpine/k8s # anything that contains kubectl + aws cli
              command:
                - /bin/bash
                - "-c"
                - |
                  PASSWORD=$(aws ecr get-login-password --region [your aws region] | base64 -w 0)
                  kubectl patch secret -n argocd argocd-repo-[name of your repository] --type merge -p "{\"data\": {\"password\": \"$PASSWORD\"}}"
ArgoCD repository secrets are usually called argocd-repo-* suffixed with the key of the repository entry in the values.yaml.
This will start a pod every 6 hours that performs an ECR login and updates the Kubernetes secret containing the repository definition for ArgoCD.
Make sure to use the argocd-server service account (or create your own) since the container will not be able to modify the secret otherwise.
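For context, a hedged sketch of what such a declarative repository secret might look like before the CronJob starts refreshing its password (the name, URL and account ID are placeholders):

apiVersion: v1
kind: Secret
metadata:
  name: argocd-repo-some-helmreponame
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: repository
stringData:
  name: some-helmreponame
  type: helm
  enableOCI: "true"
  url: XXXXXXXXXX.dkr.ecr.us-east-1.amazonaws.com
  username: AWS
  password: <initial token, later overwritten by the CronJob>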
I'm experimenting with the following (Not yet complete)
Create a secret for an AWS IAM role that allows you to get an ECR login password.
apiVersion: v1
kind: Secret
metadata:
  name: aws-ecr-get-login-password-creds
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: repository
stringData:
  AWS_ACCESS_KEY_ID: <Fill In>
  AWS_SECRET_ACCESS_KEY: <Fill In>
Now create an ArgoCD workflow that either runs every 12 hours or runs on PreSync Hook (Completely untested, will try to keep this updated, anyone can update this for me).
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: aws-ecr-get-login-password-
  annotations:
    argocd.argoproj.io/hook: PreSync
spec:
  entrypoint: update-ecr-login-password
  templates:
    # This is what will run.
    # First the awscli,
    # then the resource creation using the stdout of the previous step.
    - name: update-ecr-login-password
      steps:
        - - name: awscli
            template: awscli
        - - name: argocd-ecr-credentials
            template: argocd-ecr-credentials
            arguments:
              parameters:
                - name: password
                  value: "{{steps.awscli.outputs.result}}"
    # Create a container that has awscli in it
    # and run it to get the password using `aws ecr get-login-password`
    - name: awscli
      script:
        image: amazon/aws-cli:latest
        command: [bash]
        source: |
          aws ecr get-login-password --region us-east-1
        # We need aws secrets that can run `aws ecr get-login-password`
        envFrom:
          - secretRef:
              name: aws-ecr-get-login-password-creds
    # Now we can create the secret that has the password in it
    - name: argocd-ecr-credentials
      inputs:
        parameters:
          - name: password
      resource:
        action: create
        manifest: |
          apiVersion: v1
          kind: Secret
          metadata:
            name: argocd-ecr-credentials
            namespace: argocd
            labels:
              argocd.argoproj.io/secret-type: repository
          stringData:
            url: 133696059149.dkr.ecr.us-east-1.amazonaws.com
            username: AWS
            password: {{inputs.parameters.password}}

k6: k6 --out json - open ./test.json: permission denied

I have created a Kubernetes cluster on DigitalOcean and deployed k6 as a Job on it.
apiVersion: batch/v1
kind: Job
metadata:
  name: benchmark
spec:
  template:
    spec:
      containers:
        - name: benchmark
          image: loadimpact/k6:0.29.0
          command: ["k6", "run", "--vus", "2", "--duration", "5m", "--out", "json=./test.json", "/etc/k6-config/script.js"]
          volumeMounts:
            - name: config-volume
              mountPath: /etc/k6-config
      restartPolicy: Never
      volumes:
        - name: config-volume
          configMap:
            name: k6-config
This is what my k6-job.yaml file looks like. After deploying it to the Kubernetes cluster I checked the pod's logs, and they show a permission denied error:
level=error msg="open ./test.json: permission denied"
How can I solve this issue?
The k6 Docker image runs as an unprivileged user, but unfortunately the default work directory is set to /, so it has no permission to write there.
To work around this, consider changing the JSON output path to /home/k6/test.json, i.e.:
command: ["k6", "run", "--vus", "2", "--duration", "5m", "--out", "json=/home/k6/test.json", "/etc/k6-config/script.js"]
I'm one of the maintainers on the team, so I will propose a change to the Dockerfile to set the WORKDIR to /home/k6 to make the default behavior a bit more intuitive.
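Alternatively (an untested sketch, not from the answer above), you could keep the relative output path and set the container's working directory in the Job spec instead:

containers:
  - name: benchmark
    image: loadimpact/k6:0.29.0
    # run k6 from a directory the unprivileged k6 user can write to
    workingDir: /home/k6
    command: ["k6", "run", "--vus", "2", "--duration", "5m", "--out", "json=./test.json", "/etc/k6-config/script.js"]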

Google Cloud, Kubernetes and Cloud SQL proxy: default Compute Engine service account issue

I have Google Cloud projects A, B, C, D. They all use a similar setup for the Kubernetes cluster and deployment. Projects A, B and C were built months ago. They all use the Google Cloud SQL proxy to connect to the Google Cloud SQL service. Recently, when I started setting up Kubernetes for project D, I got the following error, visible in the Stackdriver logging:
the default Compute Engine service account is not configured with sufficient permissions to access the Cloud SQL API from this VM. Please create a new VM with Cloud SQL access (scope) enabled under "Identity and API access". Alternatively, create a new "service account key" and specify it using the -credential_file parameter
I have compared the difference between the Kubernetes cluster between A,B,C and D but they appear to be same.
Here is the deployment I am using
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: my-site
  labels:
    system: projectA
spec:
  selector:
    matchLabels:
      system: projectA
  template:
    metadata:
      labels:
        system: projectA
    spec:
      containers:
        - name: web
          image: gcr.io/customerA/projectA:alpha1
          ports:
            - containerPort: 80
          env:
            - name: DB_HOST
              value: 127.0.0.1:3306
            # These secrets are required to start the pod.
            # [START cloudsql_secrets]
            - name: DB_USER
              valueFrom:
                secretKeyRef:
                  name: cloudsql-db-credentials
                  key: username
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: cloudsql-db-credentials
                  key: password
            # [END cloudsql_secrets]
        # Change <INSTANCE_CONNECTION_NAME> here to include your GCP
        # project, the region of your Cloud SQL instance and the name
        # of your Cloud SQL instance. The format is
        # $PROJECT:$REGION:$INSTANCE
        # [START proxy_container]
        - name: cloudsql-proxy
          image: gcr.io/cloudsql-docker/gce-proxy:1.11
          command:
            - sh
            - -c
            - /cloud_sql_proxy -instances=my-gcloud-project:europe-west1:databaseName=tcp:3306
            - -credential_file=/secrets/cloudsql/credentials.json
          # [START cloudsql_security_context]
          securityContext:
            runAsUser: 2 # non-root user
            allowPrivilegeEscalation: false
          # [END cloudsql_security_context]
          volumeMounts:
            - name: cloudsql-instance-credentials
              mountPath: /secrets/cloudsql
              readOnly: true
        # [END proxy_container]
      # [START volumes]
      volumes:
        - name: cloudsql-instance-credentials
          secret:
            secretName: cloudsql-instance-credentials
      # [END volumes]
So it would appear that the default service account doesn't have enough permissions? Google Cloud doesn't allow enabling the Cloud SQL API when creating the cluster via Google Cloud console.
From what I have found while googling this issue, some say the problem was with the gcr.io/cloudsql-docker/gce-proxy image, but I have tried newer versions and the same error still occurs.
I found a solution to this problem: setting the --service-account argument when creating the cluster. Note that I haven't tested what the minimum required permissions for the new service account are.
Here are the steps:
Create a new service account; it doesn't require an API key. The name I used was "super-service".
Assign the roles Cloud SQL Admin, Compute Admin, Kubernetes Engine Admin and Editor to the new service account (see the sketch after the cluster command below).
Use gcloud to create the cluster with the new service account, like this:
gcloud container clusters create my-cluster \
  --zone=europe-west1-c \
  --labels=system=projectA \
  --num-nodes=3 \
  --enable-master-authorized-networks \
  --enable-network-policy \
  --enable-ip-alias \
  --service-account=super-service@project-D.iam.gserviceaccount.com \
  --master-authorized-networks <list-of-my-ips>
After that, the cluster and the deployment were at least created without errors.
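For the role assignment step, a hedged sketch of the bindings (the project ID and service account match the placeholders used above; as noted, the minimum required set of roles is untested):

# Bind the roles listed in step 2 to the new service account (placeholder project ID).
for role in roles/cloudsql.admin roles/compute.admin roles/container.admin roles/editor; do
  gcloud projects add-iam-policy-binding project-D \
    --member=serviceAccount:super-service@project-D.iam.gserviceaccount.com \
    --role="$role"
done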

How to pass user credentials to (user-restricted) mounted volume inside Kubernetes Pod?

I am trying to pass user credentials via Kubernetes secret to a mounted, password protected directory inside a Kubernetes Pod.
The NFS folder /mount/protected has user access restrictions, i.e. only certain users can access this folder.
This is my Pod configuration:
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  volumes:
    - name: my-volume
      hostPath:
        path: /mount/protected
        type: Directory
      secret:
        secretName: my-secret
  containers:
    - name: my-container
      image: <...>
      command: ["/bin/sh"]
      args: ["-c", "python /my-volume/test.py"]
      volumeMounts:
        - name: my-volume
          mountPath: /my-volume
When applying it, I get the following error:
The Pod "my-pod" is invalid:
* spec.volumes[0].secret: Forbidden: may not specify more than 1 volume type
* spec.containers[0].volumeMounts[0].name: Not found: "my-volume"
I created my-secret according to the following guide:
https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/#create-a-secret
So basically:
apiVersion: v1
kind: Secret
metadata:
  name: my-secret
data:
  username: bXktYXBw
  password: PHJlZGFjdGVkPg==
But when I mount the folder /mount/protected with:
spec:
  volumes:
    - name: my-volume
      hostPath:
        path: /mount/protected
        type: Directory
I get a permission denied error python: can't open file '/my-volume/test.py': [Errno 13] Permission denied when running a Pod that mounts this volume path.
My question is how can I tell my Pod that it should use specific user credentials to gain access to this mounted folder?
You're trying to tell Kubernetes that my-volume should get its content from both a host path and a Secret, and it can only have one of those.
You don't need to manually specify a host path. Kubernetes will figure out someplace appropriate to put the Secret content and it will still be visible on the mountPath you specify within the container. (Specifying hostPath: at all is usually wrong, unless you can guarantee that the path will exist with the content you expect on every node in the cluster.)
So change:
volumes:
  - name: my-volume
    secret:
      secretName: my-secret
    # but no hostPath
I eventually figured out how to pass user credentials to a mounted directory within a Pod by using CIFS Flexvolume Plugin for Kubernetes (https://github.com/fstab/cifs).
With this Plugin, every user can pass her/his credentials to the Pod.
The user only needs to create a Kubernetes secret (cifs-secret) storing the username/password, and use this secret for the mount within the Pod (a sketch of such a secret follows the snippet below).
The volume is then mounted as follows:
(...)
volumes:
  - name: test
    flexVolume:
      driver: "fstab/cifs"
      fsType: "cifs"
      secretRef:
        name: "cifs-secret"
      options:
        networkPath: "//server/share"
        mountOptions: "dir_mode=0755,file_mode=0644,noperm"
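For reference, a minimal sketch of the cifs-secret mentioned above (the values are base64-encoded placeholders, and the secret type expected by the driver should be double-checked against the plugin's README):

apiVersion: v1
kind: Secret
metadata:
  name: cifs-secret
  namespace: default
type: fstab/cifs
data:
  username: bXl1c2Vy          # base64 of "myuser"
  password: bXlwYXNzd29yZA==  # base64 of "mypassword"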

Google cloud: insufficient authentication scopes

I am having difficulties sending requests to my spring boot application deployed in my Google Cloud Kubernetes cluster. My application receives a photo and sends it to the Google Vision API. I am using the provided client library (https://cloud.google.com/vision/docs/libraries#client-libraries-install-java) as explained here https://cloud.google.com/vision/docs/auth:
If you're using a client library to call the Vision API, use Application Default Credentials (ADC). Services using ADC look for credentials within a GOOGLE_APPLICATION_CREDENTIALS environment variable. Unless you specifically wish to have ADC use other credentials (for example, user credentials), we recommend you set this environment variable to point to your service account key file.
On my local machine everything works fine: I have a Docker container with an env. variable GOOGLE_APPLICATION_CREDENTIALS pointing to my service account key file.
I do not have this variable in my cluster. This is the response I am getting from my application in the Kubernetes cluster:
{
  "timestamp": "2018-05-10T14:07:27.652+0000",
  "status": 500,
  "error": "Internal Server Error",
  "message": "io.grpc.StatusRuntimeException: PERMISSION_DENIED: Request had insufficient authentication scopes.",
  "path": "/image"
}
What am I doing wrong? Thanks in advance!
I also had to specify the GOOGLE_APPLICATION_CREDENTIALS environment variable on my GKE setup, these are the steps I completed thanks to How to set GOOGLE_APPLICATION_CREDENTIALS on GKE running through Kubernetes:
1. Create the secret (in my case in my deploy step on Gitlab):
kubectl create secret generic google-application-credentials --from-file=./application-credentials.json
2. Setup the volume:
...
volumes:
  - name: google-application-credentials-volume
    secret:
      secretName: google-application-credentials
      items:
        - key: application-credentials.json # default name created by the create secret from-file command
          path: application-credentials.json
3. Setup the volume mount:
spec:
  containers:
    - name: my-service
      volumeMounts:
        - name: google-application-credentials-volume
          mountPath: /etc/gcp
          readOnly: true
4. Setup the environment variable:
spec:
  containers:
    - name: my-service
      env:
        - name: GOOGLE_APPLICATION_CREDENTIALS
          value: /etc/gcp/application-credentials.json
That means you are trying to access a service that is not enabled, or that you are not authorized to use. Are you sure you have enabled access to the Google Vision API?
You can check/enable APIs from the dashboard at https://console.cloud.google.com/apis/dashboard, or navigate to APIs & Services from the menu.
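If you prefer the CLI, a hedged sketch of the same check from gcloud (the project ID is a placeholder):

# List the APIs currently enabled in the project.
gcloud services list --enabled --project MY_PROJECT

# Enable the Cloud Vision API if it is missing.
gcloud services enable vision.googleapis.com --project MY_PROJECT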
Will it help if you add GOOGLE_APPLICATION_CREDENTIALS environment variable to your deployment/pod/container configuration?
Here is an example of setting environment variables described in Kubernetes documentation:
apiVersion: v1
kind: Pod
metadata:
  name: envar-demo
  labels:
    purpose: demonstrate-envars
spec:
  containers:
    - name: envar-demo-container
      image: gcr.io/google-samples/node-hello:1.0
      env:
        - name: DEMO_GREETING
          value: "Hello from the environment"
        - name: DEMO_FAREWELL
          value: "Such a sweet sorrow"