How do I grant permission to my Kubernetes cluster to pull images from gcr.io?

In Google Container Registry I have the repository's permissions set to Private.
When I create a pod on my cluster, the pod status ends in ImagePullBackOff, and when I describe the pod I see:
Failed to pull image "gcr.io/REDACTED": rpc error: code = Unknown desc = Error response from daemon: pull access denied for gcr.io/REDACTED, repository does not exist or may require 'docker login': denied: Permission denied for "v11" from request "/v2/REDACTED/manifests/v11".
I am certainly logged in.
docker login
Authenticating with existing credentials...
Login Succeeded
Now if I enable public access on my Container Registry, things work fine and the pod deploys correctly. But I don't want my repository to be public. What is the correct way to keep my container repository private and still be able to deploy? I'm pretty sure this used to work a couple of weeks ago, unless I messed up something with my service account, although I don't know how to find out which service account is being used for these permissions.

If your GKE version is > 1.15, the Container Registry is in the same project, and GKE uses the default Compute Engine service account (SA), it should work out of the box.
If you are running the registry in another project, or using a different service account, you should grant the SA the right permissions (e.g., roles/artifactregistry.reader).
A step-by-step tutorial covering all the different cases is available in the official documentation: https://cloud.google.com/artifact-registry/docs/access-control#gcp
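For the cross-project or custom-SA case, granting that role looks roughly like this (a minimal sketch; the project ID and service-account email are placeholders, not values from the question):
gcloud projects add-iam-policy-binding REGISTRY_PROJECT_ID \
  --member="serviceAccount:PROJECT_NUMBER-compute@developer.gserviceaccount.com" \
  --role="roles/artifactregistry.reader"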

To use gcr.io or any other private artifact registry, you'll need to create a Secret of type docker-registry in the k8s cluster. The secret will contain the credential details of your registry:
kubectl create secret docker-registry <secret-name> \
--docker-server=<server-name> \
--docker-username=<user-name> \
--docker-password=<user-password> \
--docker-email=<user-email-id>
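For gcr.io specifically, a filled-in version might look like this (a sketch assuming you authenticate with a service-account JSON key; _json_key is the literal username GCR expects, and key.json is a placeholder path):
kubectl create secret docker-registry gcr-secret \
  --docker-server=gcr.io \
  --docker-username=_json_key \
  --docker-password="$(cat key.json)" \
  --docker-email=user@example.com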
After this, you will need to specify the above secret in the imagePullSecrets property of your manifest so that k8s is able to authenticate and pull the image:
apiVersion: v1
kind: Pod
metadata:
  name: pod1
  namespace: default
spec:
  containers:
  - name: pod1
    image: gcr.io/pod1:latest
  imagePullSecrets:
  - name: <secret-name>
Check out this tutorial from container-solutions and the official k8s docs.

GKE uses the service account attached to the node pools to grant access to the registry; however, you must also make sure that the OAuth scope for your cluster includes https://www.googleapis.com/auth/devstorage.read_only.
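You can check what scopes a node pool actually has with something like this (a sketch; the pool, cluster, and zone names are placeholders):
gcloud container node-pools describe POOL_NAME \
  --cluster=CLUSTER_NAME --zone=ZONE \
  --format="value(config.oauthScopes)"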

Related

Pulling image from private container registry (Harbor) in Kubernetes

I am using Harbor (https://goharbor.io/) as a private container registry. I run Harbor using Docker Compose, and it is working fine. I can push/pull images to this private registry from a VM. I have already used the 'docker login' command to log in to this Harbor repository.
For Kubernetes, I am using k3s.
Now, I want to create a pod in Kubernetes using an image from this Harbor private repository. I referred to the Harbor and Kubernetes documentation (https://goharbor.io/docs/1.10/working-with-projects/working-with-images/pulling-pushing-images/ and https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/) to pull the image.
As mentioned in the Harbor documentation:
Kubernetes users can easily deploy pods with images stored in Harbor. The settings are similar to those of any other private registry. There are two issues to be aware of:
When your Harbor instance is hosting HTTP and the certificate is self-signed, you must modify daemon.json on each work node of your cluster. For information, see https://docs.docker.com/registry/insecure/#deploy-a-plain-http-registry.
If your pod references an image under a private project, you must create a secret with the credentials of a user who has permission to pull images from the project. For information, see https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/.
I created the daemon.json file in /etc/docker:
{
  "insecure-registries" : ["my-harbor-server:443"]
}
As mentioned in the Kubernetes documentation, I created the Secret using this command:
kubectl create secret generic regcred \
--from-file=.dockerconfigjson=<path/to/.docker/config.json> \
--type=kubernetes.io/dockerconfigjson
Then I used a file called pod.yml to create the pod (using kubectl apply -f pod.yml):
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
  - name: myapp
    image: my-harbor-server/my-project/mayapp:v1.0
  imagePullSecrets:
  - name: regcred
However, when I checked the pod status, it was showing 'ImagePullBackOff'. The pod logs show:
Error from server (BadRequest): container "myapp" in pod "myapp" is waiting to start: trying and failing to pull image
Is there any other step that I have to do to pull this image from the Harbor private repository into Kubernetes? What is the reason that I cannot pull it?
The /etc/docker/daemon.json file configures the Docker engine. If your CRI is not the Docker shim, then this file will not apply to Kubernetes. For k3s, the registry is configured using /etc/rancher/k3s/registries.yaml. See https://rancher.com/docs/k3s/latest/en/installation/private-registry/ for details on configuring this file. It needs to be done on each host.
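A minimal sketch of what that file could look like for this setup (the hostname matches the question; the credentials are assumptions):
# /etc/rancher/k3s/registries.yaml
mirrors:
  my-harbor-server:
    endpoint:
      - "https://my-harbor-server:443"
configs:
  my-harbor-server:
    auth:
      username: harbor-user      # assumed: an account allowed to pull from the project
      password: harbor-pass      # assumed
    tls:
      insecure_skip_verify: true # only because the certificate is self-signed
Restart k3s (or k3s-agent) on each node after editing the file so the change takes effect.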

GKE pulling images from a private repository inside GCP

I've set up a private container registry that is integrated with Bitbucket successfully. However, I am not able to pull the images from my GKE cluster.
I created a service account with the role "Project Viewer" and a JSON key for this account. Then I created the secret in the cluster/namespace by running:
kubectl create secret docker-registry gcr-json-key \
--docker-server=gcr.io \
--docker-username=_json_key \
--docker-password="$(cat ~/code/bitbucket/miappsrl/miappnodeapi/secrets/registry/miapp-staging-e94050365be1.json)" \
--docker-email=agusmiappgcp@gmail.com
And in the deployment file I added
...
imagePullSecrets:
- name: gcr-json-key
...
But when I apply the deployment I get
ImagePullBackOff
And when I do a kubectl describe pod <pod_name> I see
Failed to pull image "gcr.io/miapp-staging/miappnodeapi": rpc error: code = Unknown desc = Error response from daemon: Get https://gcr.io/v2/: dial tcp: lookup gcr.io on 169.254.169.254:53: no such host
I can't figure out what I am missing. I understand it can't resolve the DNS inside the cluster, but I'm not sure what I should add.
If a GKE cluster is set up as private, you need to set up DNS to reach Container Registry. From the documentation:
To support GKE private clusters that use Container Registry or Artifact Registry inside a service perimeter, you first need to configure your DNS server so requests to registry addresses resolve to restricted.googleapis.com, the restricted VIP. You can do so using Cloud DNS private DNS zones.
Verify whether you set up your cluster as private.
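If it is private, the DNS setup the documentation describes looks roughly like this (a sketch; the zone name and VPC network are placeholders, and 199.36.153.4/30 is the documented address range for restricted.googleapis.com):
gcloud dns managed-zones create gcr-io-zone \
  --dns-name="gcr.io." --visibility=private \
  --networks=my-vpc --description="route gcr.io to the restricted VIP"
gcloud dns record-sets create "gcr.io." --zone=gcr-io-zone \
  --type=A --ttl=300 \
  --rrdatas=199.36.153.4,199.36.153.5,199.36.153.6,199.36.153.7
gcloud dns record-sets create "*.gcr.io." --zone=gcr-io-zone \
  --type=CNAME --ttl=300 --rrdatas="gcr.io."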

Can't modify (patch) docker image id of existing kubernetes deployment definition on EKS (AWS)

I created a CodePipeline definition on AWS. At the beginning I build a docker image and send it to ECR (the container registry on AWS). When the docker image has been sent to the registry, I call a lambda function that should update the definition of an existing deployment by replacing the docker image id in that deployment definition. The lambda function is implemented in nodejs; it takes the recently pushed image id and tries to patch the deployment definition. When it tries to patch the deployment I receive a response like the one below.
body: {
  kind: 'Status',
  apiVersion: 'v1',
  metadata: {},
  status: 'Failure',
  message: 'deployments.apps "arch-app" is forbidden:
    User "system:serviceaccount:arch-user:default" cannot patch resource "deployments"
    in API group "apps" in the namespace "arch-ns"',
  reason: 'Forbidden',
  details: [Object],
  code: 403
}
This user account belongs to AWS IAM and I used it to create the test cluster with Kubernetes, so it is the owner of the cluster. Every operation I perform on the cluster I perform using this account, and it works fine (I can create resources and apply changes to them without any problems using this account).
I created an additional role in this namespace and a role binding for the AWS user account I use, but it didn't resolve the issue (and was probably redundant). The lambda function has full permissions to all resources on ECR and EKS.
Has anybody had a similar issue with this kind of deployment patching on EKS using a lambda function?
You can check whether the service account has RBAC permission to patch deployments in the namespace arch-ns:
kubectl auth can-i patch deployment --as=system:serviceaccount:arch-user:default -n arch-ns
If the above command returns no, then add the necessary Role and RoleBinding for the service account, as sketched below.
One thing to notice here is that it is the default service account in the arch-user namespace, but it is trying to perform an operation in a different namespace, arch-ns.
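A minimal sketch of such a Role and RoleBinding (the resource names are placeholders; the subject matches the service account from the error message):
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: deployment-patcher
  namespace: arch-ns
rules:
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["get", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: deployment-patcher-binding
  namespace: arch-ns
subjects:
- kind: ServiceAccount
  name: default
  namespace: arch-user
roleRef:
  kind: Role
  name: deployment-patcher
  apiGroup: rbac.authorization.k8s.io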

How to restrict default Service account from creating/deleting kubernetes resources

I am using Google Cloud's GKE for my kubernetes operations.
I am trying to restrict access for the users that access the clusters using the command line. I have applied IAM roles in Google Cloud and given the view role to the service accounts and users. It all works fine if we use it through the API or the --as flag in kubectl commands, but when someone does a kubectl create without specifying --as, the object still gets created with the "default" service account of that particular namespace.
To overcome this problem we gave restricted access to the "default" service account, but we were still able to create objects.
$ kubectl auth can-i create deploy --as default -n test-rbac
no
$ kubectl run nginx-test-24 -n test-rbac --image=nginx
deployment.apps "nginx-test-24" created
$ kubectl describe rolebinding default-view -n test-rbac
Name:         default-view
Labels:       <none>
Annotations:  <none>
Role:
  Kind:  ClusterRole
  Name:  view
Subjects:
  Kind            Name     Namespace
  ----            ----     ---------
  ServiceAccount  default  test-rbac
I expect that users accessing the cluster through the CLI should not be able to create objects if they don't have permissions; even if they don't use the --as flag they should be restricted.
Please take into account that you first need to review the prerequisites to use RBAC in GKE.
Also, please note that IAM roles apply to the entire Google Cloud project and all clusters within that project, while RBAC enables fine-grained authorization at the namespace level. So with GKE, these approaches to authorization work in parallel.
For more references, please take a look at this document: RBAC in GKE
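Among those prerequisites: legacy ABAC authorization must be disabled on the cluster, roughly like this (a sketch; the cluster name and zone are placeholders):
gcloud container clusters update CLUSTER_NAME --zone=ZONE \
  --no-enable-legacy-authorization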
For those who downvoted instead of helping, this is what finally fixed it:
there is a file at
~/.config/gcloud/configurations/config_default
and in it, under the [container] section, there is an option:
use_application_default_credentials
Set it to true.
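For clarity, the relevant part of that file ends up looking like this (a sketch; any other existing entries stay as they are):
[container]
use_application_default_credentials = true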

minikube and azure container registry

I am trying to pull images in minikube from an Azure Container Registry. This keeps failing; it says unauthorized:
unauthorized: authentication required
I used kubectl create secret to add the credentials for the registry, but it keeps failing.
What I have tried so far:
I added the URL with and without https.
I added the admin user and made a new service principal.
I tried to add the secret to the default service account, in the hope something was wrong with the yaml.
I used minikube ssh to see if I could use docker login and docker pull (that worked).
I'm getting a bit desperate. What can I try next? How can I troubleshoot this better?
The kubectl create secret command stores the registry credentials in a Secret object inside the cluster; docker login writes them to ~/.docker/config.json instead, which is why your docker login and docker pull inside minikube ssh worked.
I suspect you may have created your secret in the wrong namespace.
Pods can only reference image pull secrets in their own namespace, so this process needs to be done once per namespace.
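A quick way to check and fix this (a sketch; the secret and namespace names are placeholders):
# Find out which namespaces actually contain the secret
kubectl get secrets --all-namespaces | grep acr-secret
# Re-create it in the namespace the pod actually runs in
kubectl -n <namespace> create secret docker-registry acr-secret \
  --docker-server=<registry>.azurecr.io \
  --docker-username=<service-principal-id> \
  --docker-password=<service-principal-password>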
https://kubernetes.io/docs/concepts/containers/images/#using-azure-container-registry-acr
You can create a secret and save it in your kubernetes context (in your case the minikube context).
First, be sure that you are pointing at the right context by running kubectl config get-contexts, and kubectl config use-context minikube to switch to the minikube context.
Create a secret in the minikube context by running kubectl create secret docker-registry acr-secret --docker-server=<your acr server>.azurecr.io --docker-username=<your acr username> --docker-password=<your acr password>. This command generates acr-secret.
Import acr-secret into your deployment yml files by adding imagePullSecrets inside each template.spec tag:
imagePullSecrets:
- name: acr-secret
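In context, that snippet sits in a Deployment roughly like this (a sketch; the names and image are placeholders):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: <your acr server>.azurecr.io/myapp:1.0
      imagePullSecrets:
      - name: acr-secret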
I think there is a much simpler answer. First you need to install the Azure CLI and log in:
az login
After that, you can get credentials for your Azure Container Registry:
az acr login -n yoursupercoolregistry.azurecr.io --expose-token
This will give you something like this:
{
  "accessToken": "averylongtoken",
  "loginServer": "yoursupercoolregistry.azurecr.io"
}
With this information, you can enable registry-creds and configure a Docker Registry (and not an Azure Container Registry):
minikube addons enable registry-creds
minikube addons configure registry-creds
You have to enter your server URL, and 00000000-0000-0000-0000-000000000000 as the client ID:
Do you want to enable Docker Registry? [y/n]: y
-- Enter docker registry server url: yoursupercoolregistry.azurecr.io
-- Enter docker registry username: 00000000-0000-0000-0000-000000000000
At the end, paste the token into the command line, hit enter, and you should get this:
✅ registry-creds was successfully configured
And then add imagePullSecrets to your file
spec:
  containers:
  - name: supercoolcontainer
    image: yoursupercoolregistry.azurecr.io/supercoolimage:1.0.2
  imagePullSecrets:
  - name: dpr-secret