ImagePullSecrets GCR - kubernetes

I am having an issue configuring GCR with ImagePullSecrets in my deployment.yaml file. It cannot download the container due to a permission error:
Failed to pull image "us.gcr.io/optimal-jigsaw-185903/syncope-deb": rpc error: code = Unknown desc = Error response from daemon: denied: Permission denied for "latest" from request "/v2/optimal-jigsaw-185903/syncope-deb/manifests/latest".
I am sure that I am doing something wrong, but I followed this tutorial (and others like it) with still no luck:
https://ryaneschinger.com/blog/using-google-container-registry-gcr-with-minikube/
The pod logs are equally useless:
"syncope-deb" in pod "syncope-deployment-64479cdcf5-cng57" is waiting to start: trying and failing to pull image
My deployment looks like:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  # Unique key of the Deployment instance
  name: syncope-deployment
  namespace: default
spec:
  # 1 Pod should exist at all times.
  replicas: 1
  # Keep record of 2 revisions for rollback
  revisionHistoryLimit: 2
  template:
    metadata:
      labels:
        # Apply this label to pods and default
        # the Deployment label selector to this value
        app: syncope-deb
    spec:
      imagePullSecrets:
      - name: mykey
      containers:
      - name: syncope-deb
        # Run this image
        image: us.gcr.io/optimal-jigsaw-185903/syncope-deb
        ports:
        - containerPort: 9080
And I have a key in my default namespace called "mykey" that looks like this (I edited out the secure data):
{"https://gcr.io":{"username":"_json_key","password":"{\n \"type\": \"service_account\",\n \"project_id\": \"optimal-jigsaw-185903\",\n \"private_key_id\": \"EDITED_TO_PROTECT_THE_INNOCENT\",\n \"private_key\": \"-----BEGIN PRIVATE KEY-----\\EDITED_TO_PROTECT_THE_INNOCENT\\n-----END PRIVATE KEY-----\\n\",\n \"client_email\": \"bobs-service#optimal-jigsaw-185903.iam.gserviceaccount.com\",\n \"client_id\": \"109145305665697734423\",\n \"auth_uri\": \"https://accounts.google.com/o/oauth2/auth\",\n \"token_uri\": \"https://accounts.google.com/o/oauth2/token\",\n \"auth_provider_x509_cert_url\": \"https://www.googleapis.com/oauth2/v1/certs\",\n \"client_x509_cert_url\": \"https://www.googleapis.com/robot/v1/metadata/x509/bobs-service%40optimal-jigsaw-185903.iam.gserviceaccount.com\"\n}","email":"redfalconinc#gmail.com","auth":"EDITED_TO_PROTECT_THE_INNOCENT"}}
I even loaded that user up with the permissions of:
Editor
Cloud Container Builder
Cloud Container Builder Editor
Service Account Actor
Service Account Admin
Storage Admin
Storage Object Admin
Storage Object Creator
Storage Object Viewer
Any help would be appreciated as I am spending a lot of time on seemingly a very simple problem.

The issue is most likely that you are using a secret of type dockerconfigjson that actually contains dockercfg data. The kubectl command for creating these secrets changed at some point, which causes this mismatch.
Can you check whether the secret is marked as dockercfg or dockerconfigjson, and then check whether its contents are actually valid dockerconfigjson?
The JSON you have provided is dockercfg (not the new format).
See https://github.com/kubernetes/kubernetes/issues/12626#issue-100691532 for info about the formats
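As a quick sanity check (a sketch, not from the original answer; the key file name key.json is an assumption):
kubectl get secret mykey -o jsonpath='{.type}'
# If the type says kubernetes.io/dockerconfigjson but the payload looks like the
# old dockercfg format above, delete and recreate the secret so kubectl writes
# the new format:
kubectl delete secret mykey
kubectl create secret docker-registry mykey \
  --docker-server=https://us.gcr.io \
  --docker-username=_json_key \
  --docker-password="$(cat key.json)" \
  --docker-email=any@valid.email
Note that the image is pulled from us.gcr.io while the secret shown above is keyed to https://gcr.io, so matching the server to the image's registry host is worth checking too.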

Related

Kubectl error upon applying agones fleet: ensure CRDs are installed first

I am using minikube (docker driver) with kubectl to test an agones fleet deployment. Upon running kubectl apply -f lobby-fleet.yml (and when I try to apply any other agones yaml file) I receive the following error:
error: resource mapping not found for name: "lobby" namespace: "" from "lobby-fleet.yml": no matches for kind "Fleet" in version "agones.dev/v1"
ensure CRDs are installed first
lobby-fleet.yml:
apiVersion: "agones.dev/v1"
kind: Fleet
metadata:
name: lobby
spec:
replicas: 2
scheduling: Packed
template:
metadata:
labels:
mode: lobby
spec:
ports:
- name: default
portPolicy: Dynamic
containerPort: 7600
container: lobby
template:
spec:
containers:
- name: lobby
image: gcr.io/agones-images/simple-game-server:0.12 # Modify to correct image
I am running this on WSL2, but receive the same error when using the windows installation of kubectl (through choco). I have minikube installed and running for ubuntu in WSL2 using docker.
I am still new to using k8s, so apologies if the answer to this question is clear, I just couldn't find it elsewhere.
Thanks in advance!
In order to create a resource of kind Fleet, you first have to apply the Custom Resource Definition (CRD) that defines what a Fleet is.
I've looked into the YAML installation instructions of Agones, and the install manifest contains the CRDs; you can find them by searching for kind: CustomResourceDefinition.
I recommend you first try to install Agones according to the instructions in the docs.
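For instance, you can check whether the CRDs are already registered before applying the Fleet (a quick check, not part of the original answer):
kubectl get crd | grep agones.dev
# expect entries such as fleets.agones.dev and gameservers.agones.dev
If nothing shows up, install Agones first; the install manifest URL below follows the pattern from the Agones install docs, with the release version left as a placeholder:
kubectl create namespace agones-system
kubectl apply --server-side -f https://raw.githubusercontent.com/googleforgames/agones/release-<VERSION>/install/yaml/install.yaml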

Error in deploy stage: "lchmod (file attributes) error: Not supported"

I am attempting to deploy an image "casbin-role-backend" to the cloud, but it always fails.
The following is found from log:
Preparing to start the job...
Pipeline image: latest
Preparing the build artifacts...
lchmod (file attributes) error: Not supported
.....
DEPLOYING using manifest
+++ kubectl apply --namespace default -f ./tmp.deployment.yaml
deployment.apps/casbin-role-backend unchanged
The Service "casbin-role-backend" is invalid: spec.ports[0].nodePort: Invalid value: 30080: provided port is already allocated
+++ set +x
CHECKING deployment rollout of casbin-role-backend
+++ kubectl rollout status deploy/casbin-role-backend --watch=true --timeout=150s --namespace default
error: deployment "casbin-role-backend" exceeded its progress deadline
+++ STATUS=fail
+++ set +x
SHOWING last events
LAST SEEN TYPE REASON OBJECT MESSAGE
41m Warning Failed pod/casbin-role-mgt-ui-7d59b6d4cf-2pbhm Error: InvalidImageName
2m11s Warning InspectFailed pod/casbin-role-backend-68d76464dd-vbvch Failed to apply default image tag "//:": couldn't parse image reference "//:": invalid reference format
...
DEPLOYMENT FAILED
....
OK
Finished: FAILED
And below is my deployment.yaml:
apiVersion: v1
kind: Service
metadata:
  name: casbin-role-backend
  labels:
    app: app
spec:
  type: NodePort
  ports:
  - port: 3000
    name: http
    nodePort: 30080
  selector:
    app: app
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: casbin-role-backend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
      - name: casbin-role-backend
        image: xxx/casbin-role-backend
        ports:
        - containerPort: 3000
Does anybody know what this error is? I have searched for some time but still cannot find what it is or how to fix it.
Update:
The source code originates from the repository below; I added the Dockerfile and deployment.yaml to deploy it on k8s.
https://github.com/alikhan866/Casbin-Role-Mgt-Dashboard-RBAC
Dockerfile source:
# pull official base image
FROM node:13.12.0-alpine
# set working directory
WORKDIR /dist
# add `/app/node_modules/.bin` to $PATH
ENV PATH /app/node_modules/.bin:$PATH
# install app dependencies
COPY package.json ./
COPY package-lock.json ./
RUN npm install
# add app
COPY . ./
# start app
CMD ["npm", "run dev"]
I see two issues here:
1.
The Service "casbin-role-backend" is invalid: spec.ports[0].nodePort: Invalid value: 30080: provided port is already allocated
It means that the port requested by the NodePort service is already in use. You can list the services using that port with kubectl get svc --all-namespaces | grep '30080', then change the port value or delete the conflicting service. Also, make sure that you specify the proper namespace.
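For example (nodePort 30081 below is just an arbitrary free port, not a recommendation):
kubectl get svc --all-namespaces | grep 30080
# if another service owns 30080, either delete it:
#   kubectl delete svc <conflicting-service> -n <its-namespace>
# or pick a free port in deployment.yaml, e.g.:
#   nodePort: 30081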
2.
2m11s Warning InspectFailed pod/casbin-role-backend-68d76464dd-vbvch Failed to apply default image tag "//:": couldn't parse image reference "//:": invalid reference format
My educated guess here is that your image name is invalid because it starts with https:// or ://. A proper image reference looks like this:
image: registry_host/organization_name/image_name:image_version
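For example, a hypothetical valid reference (the registry host, organization, and tag below are placeholders, not your actual values):
image: docker.io/my-org/casbin-role-backend:v1.0.0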

How to configure microk8s kubernetes to use private containers in https://hub.docker.com/?

The microk8s document "Working with a private registry" leaves me unsure what to do. The Secure registry portion says Kubernetes does it one way (without indicating whether or not Kubernetes' way applies to microk8s), and microk8s uses containerd inside its implementation.
My YAML file contains a reference to a private container on dockerhub.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: blaw
spec:
  replicas: 1
  selector:
    matchLabels:
      app: blaw
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: blaw
    spec:
      containers:
      - image: johngrabner/py_blaw_service:v0.3.10
        name: py-transcribe-service
When I microk8s kubectl apply this file and do a microk8s kubectl describe, I get:
Warning Failed 16m (x4 over 18m) kubelet Failed to pull image "johngrabner/py_blaw_service:v0.3.10": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/johngrabner/py_blaw_service:v0.3.10": failed to resolve reference "docker.io/johngrabner/py_blaw_service:v0.3.10": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed
I have verified that I can download this repo from a console doing a docker pull command.
Pods using public containers work fine in microk8s.
The file /var/snap/microk8s/current/args/containerd-template.toml already contains something to make dockerhub work since public containers work. Within this file, I found
# 'plugins."io.containerd.grpc.v1.cri".registry' contains config related to the registry
[plugins."io.containerd.grpc.v1.cri".registry]
# 'plugins."io.containerd.grpc.v1.cri".registry.mirrors' are namespace to mirror mapping for all namespaces.
[plugins."io.containerd.grpc.v1.cri".registry.mirrors]
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
endpoint = ["https://registry-1.docker.io", ]
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."localhost:32000"]
endpoint = ["http://localhost:32000"]
The above does not appear related to authentication.
On the internet, I found instructions to create a secret to store credentials, but this does not work either.
microk8s kubectl create secret generic regcred --from-file=.dockerconfigjson=/home/john/.docker/config.json --type=kubernetes.io/dockerconfigjson
While you have created the secret, you then have to set up your deployment/pod to use that secret in order to download the image. This can be achieved with imagePullSecrets, as described in the microk8s document you mentioned.
Since you already created your secret, you just have to reference it in your deployment:
...
spec:
  containers:
  - image: johngrabner/py_blaw_service:v0.3.10
    name: py-transcribe-service
  imagePullSecrets:
  - name: regcred
...
For more reading check how to Pull an Image from a Private Registry.
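You can also confirm the secret was created with the expected type before redeploying (a quick check, not part of the original answer):
microk8s kubectl get secret regcred -o jsonpath='{.type}'
# expected output: kubernetes.io/dockerconfigjson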

ErrImagePull on kubernetes

I am running into this error when I deploy a pod:
- the image lies on Google Container Registry within the same project as the cluster
- I can pull the image from the registry on my local computer
- I cannot pull the image if I SSH into the instance
From the docs it states that this should work out of the box. I checked and storage read access is indeed there.
Here's the config:
apiVersion: v1
kind: ReplicationController
metadata:
  name: luigi
spec:
  replicas: 1
  selector:
    app: luigi
  template:
    metadata:
      name: luigi
      labels:
        app: luigi
    spec:
      containers:
      - name: scheduler
        image: eu.gcr.io/bi/luigi/scheduler:latest
        command: ['/usr/src/app/run_scheduler.sh']
      - name: worker
        image: eu.gcr.io/bi/luigi/scheduler:latest
        command: ['/usr/src/app/run_worker.sh']
Describe gives me:
Failed to pull image "eu.gcr.io/bi/luigi/scheduler:latest": rpc error: code = Unknown desc = Error response from daemon: repository eu.gcr.io/bi/luigi/scheduler not found: does not exist or no pull access
From the error message, it seems to be caused by the absence of credentials to download the image from the Docker registry. Please note that these access credentials are "client specific": in this case kubernetes (kubelet, to be specific) is the client, and it needs an imagePullSecret to present the necessary credentials.
Please add the imagePullSecret with the required credentials and it should work.
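A minimal sketch of creating such a secret for this registry (the secret name gcr-pull and key file key.json are assumptions, not from the answer):
kubectl create secret docker-registry gcr-pull \
  --docker-server=https://eu.gcr.io \
  --docker-username=_json_key \
  --docker-password="$(cat key.json)" \
  --docker-email=any@valid.email
Then reference gcr-pull under spec.template.spec.imagePullSecrets in the ReplicationController manifest.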

Private repository passing through kubernetes yaml file

We have tried to set up a hivemq manifest file. We have a hivemq docker image in our private repository.
Step 1: I logged into the private repository:
docker login "private repo name"
It was successful.
After that, I tried to create a manifest file for it, like below:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hivemq
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: hivemq1
    spec:
      containers:
      - env:
          xxxxx some envronment values I have passed
        name: hivemq
        image: privatereponame:portnumber/directoryname/hivemq:
        ports:
        - containerPort: 1883
It creates successfully, but I am getting the issues below. Could anyone please help solve this?
hivemq-4236597916-mkxr4 0/1 ImagePullBackOff 0 1h
Logs:
Error from server (BadRequest): container "hivemq16" in pod "hivemq16-1341290525-qtkhb" is waiting to start: InvalidImageName
Sometimes I get this kind of issue:
Error from server (BadRequest): container "hivemq" in pod "hivemq-4236597916-mkxr4" is waiting to start: trying and failing to pull image
In order to use a private Docker registry with Kubernetes, it's not enough to docker login.
You need to add a Kubernetes docker-registry Secret with your credentials, as described here: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/. That article also describes the imagePullSecrets setting you have to add to your yaml deployment file, referencing that secret.
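For illustration, the relevant part of the deployment would look roughly like this (the secret name regcred and the image tag are hypothetical; note that your current image reference ends with a bare colon, which is exactly what produces InvalidImageName):
spec:
  imagePullSecrets:
  - name: regcred  # the docker-registry secret created per the linked docs
  containers:
  - name: hivemq
    image: privatereponame:portnumber/directoryname/hivemq:4.2.1  # hypothetical non-empty tag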
I just fixed this on my machine: kubectl v1.9.0 failed to create the secret properly. Upgrading to v1.9.1, deleting the secret, and recreating it resolved the issue for me. https://github.com/kubernetes/kubernetes/issues/57427
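In that situation the recovery is just (secret name and credentials below are placeholders):
kubectl delete secret regcred
kubectl create secret docker-registry regcred \
  --docker-server=<your-registry> \
  --docker-username=<user> \
  --docker-password=<password> \
  --docker-email=<email>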