I've set up a private container registry that is successfully integrated with Bitbucket. However, I am not able to pull the images from my GKE cluster.
I created a service account with the role "Project Viewer" and a JSON key for this account. Then I created the secret in the cluster/namespace by running:
kubectl create secret docker-registry gcr-json-key \
--docker-server=gcr.io \
--docker-username=_json_key \
--docker-password="$(cat ~/code/bitbucket/miappsrl/miappnodeapi/secrets/registry/miapp-staging-e94050365be1.json)" \
--docker-email=agusmiappgcp@gmail.com
And in the deployment file I added
...
imagePullSecrets:
- name: gcr-json-key
...
But when I apply the deployment I get
ImagePullBackOff
And when I do a kubectl describe pod <pod_name> I see
Failed to pull image "gcr.io/miapp-staging/miappnodeapi": rpc error: code = Unknown desc = Error response from daemon: Get https://gcr.io/v2/: dial tcp: lookup gcr.io on 169.254.169.254:53: no such host
I can't figure out what I am missing. I understand the node can't resolve the registry's DNS from inside the cluster, but I'm not sure what I should add.
If a GKE cluster is set up as private, you need to set up DNS to reach Container Registry. From the documentation:
To support GKE private clusters that use Container Registry or Artifact Registry inside a service perimeter, you first need to configure your DNS server so requests to registry addresses resolve to restricted.googleapis.com, the restricted VIP. You can do so using Cloud DNS private DNS zones.
Verify whether you set up your cluster as private.
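As a minimal sketch, assuming the cluster's VPC network is named <my-network> (a placeholder), a private Cloud DNS zone can resolve gcr.io to the restricted VIP range (199.36.153.4/30, per the documentation):
gcloud dns managed-zones create gcr-io \
    --description="Route gcr.io to the restricted VIP" \
    --dns-name=gcr.io \
    --visibility=private \
    --networks=<my-network>
gcloud dns record-sets transaction start --zone=gcr-io
gcloud dns record-sets transaction add --zone=gcr-io --name=gcr.io. \
    --type=A --ttl=300 199.36.153.4 199.36.153.5 199.36.153.6 199.36.153.7
gcloud dns record-sets transaction add --zone=gcr-io --name="*.gcr.io." \
    --type=CNAME --ttl=300 gcr.io.
gcloud dns record-sets transaction execute --zone=gcr-io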
In my container registry the repository's visibility is set to Private. When I create a pod on my cluster, the pod status ends in ImagePullBackOff, and when I describe the pod I see:
Failed to pull image "gcr.io/REDACTED": rpc error: code = Unknown desc = Error response from daemon: pull access denied for gcr.io/REDACTED, repository does not exist or may require 'docker login': denied: Permission denied for "v11" from request "/v2/REDACTED/manifests/v11".
I am certainly logged in.
docker login
Authenticating with existing credentials...
Login Succeeded
Now if I enable public access on my container repository, things work fine and the pod deploys correctly. But I don't want my repository to be public. What is the correct way to keep my container repository private and still be able to deploy? I'm pretty sure this used to work a couple of weeks ago, unless I messed up something with my service account, although I don't know how to find out which service account is being used for these permissions.
If your GKE version is > 1.15, the Container Registry is in the same project, and GKE uses the default Compute Engine service account (SA), it should work out of the box.
If you are running the registry in another project, or using a different service account, you should give the SA the right permissions (e.g., roles/artifactregistry.reader).
A step-by-step tutorial covering all the different cases is available in the official documentation: https://cloud.google.com/artifact-registry/docs/access-control#gcp
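As a hedged sketch, granting the reader role to the node service account might look like this (the project ID and SA email are placeholders):
gcloud projects add-iam-policy-binding <registry-project-id> \
    --member="serviceAccount:<sa-email>" \
    --role="roles/artifactregistry.reader"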
To use gcr.io or any other private artifact registry, you'll need to create a Secret of type docker-registry in the k8s cluster. The secret will contain the credential details of your registry:
kubectl create secret docker-registry <secret-name> \
--docker-server=<server-name> \
--docker-username=<user-name> \
--docker-password=<user-password> \
--docker-email=<user-email-id>
After this, you will need to specify the above secret in the imagePullSecrets property of your manifest so that k8s is able to authenticate and pull the image.
apiVersion: v1
kind: Pod
metadata:
  name: pod1
  namespace: default
spec:
  containers:
  - name: pod1
    image: gcr.io/<project-id>/pod1:latest
  imagePullSecrets:
  - name: myregistrykey
Check out this tutorial from container-solutions and the official k8s docs.
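Optionally, the secret can be attached to the namespace's default ServiceAccount so every pod in the namespace uses it without listing imagePullSecrets explicitly (a sketch; the secret name matches the example above):
kubectl patch serviceaccount default \
    -p '{"imagePullSecrets": [{"name": "myregistrykey"}]}'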
GKE uses the service account attached to the node pools to grant access to the registry; however, you must also be sure that the OAuth scope for your cluster includes https://www.googleapis.com/auth/devstorage.read_only.
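A quick way to check the scopes, as a sketch (the cluster, zone, and node pool names are placeholders):
gcloud container node-pools describe <pool-name> \
    --cluster=<my-cluster> --zone=<zone> \
    --format="value(config.oauthScopes)"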
Google Managed Prometheus seems like a great service; however, at the moment it does not work even in the example: https://cloud.google.com/stackdriver/docs/managed-prometheus/setup-managed
Setup:
1. Create a new Autopilot cluster (1.21.12-gke.2200).
2. Enable managed Prometheus via the gcloud CLI:
gcloud beta container clusters update <mycluster> --enable-managed-prometheus --region us-central1
3. Add the port 8443 firewall rule for the webhook.
4. Install ingress-nginx.
5. Try to use a PodMonitoring manifest to get metrics from ingress-nginx (a sketch of such a manifest follows the error below).
Error from server (InternalError): error when creating "ingress-nginx/metrics.yaml": Internal error occurred: failed calling webhook "default.podmonitorings.gmp-operator.gke-gmp-system.monitoring.googleapis.com": Post "https://gmp-operator.gke-gmp-system.svc:443/default/monitoring.googleapis.com/v1/podmonitorings?timeout=10s": x509: certificate is valid for gmp-operator, gmp-operator.gmp-system, gmp-operator.gmp-system.svc, not gmp-operator.gke-gmp-system.svc
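For reference, the PodMonitoring manifest being applied looks roughly like this (a sketch; the label selector and port name are assumptions based on ingress-nginx defaults):
apiVersion: monitoring.googleapis.com/v1
kind: PodMonitoring
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
  endpoints:
  - port: metrics
    interval: 30s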
There is a thread suggesting this will all be fixed this week (8/11/2022), https://github.com/GoogleCloudPlatform/prometheus-engine/issues/300, but it seems like this should work regardless.
If I try to port-forward:
kubectl -n gke-gmp-system port-forward svc/gmp-operator 8443
error: Pod 'gmp-operator-67d5fff8b9-p4n7t' does not have a named port 'webhook'
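As a diagnostic sketch, you can inspect which ports the Service actually defines and forward by number instead of by name (8443:443 assumes the Service exposes port 443, which the webhook URL above suggests):
kubectl -n gke-gmp-system get svc gmp-operator -o jsonpath='{.spec.ports}'
kubectl -n gke-gmp-system port-forward svc/gmp-operator 8443:443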
I am using Harbor (https://goharbor.io/) as a private container registry. I run Harbor using Docker Compose, and it is working fine. I can push/pull images to this private registry from a VM. I already used the 'docker login' command to log in to this Harbor registry.
For Kubernetes, I am using k3s.
Now, I want to create a pod in Kubernetes using the image in this Harbor private repository. I referred to Harbor & Kubernetes documentations (https://goharbor.io/docs/1.10/working-with-projects/working-with-images/pulling-pushing-images/) & (https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/) to pull the image.
As mentioned in the Harbor documentation:
Kubernetes users can easily deploy pods with images stored in Harbor. The settings are similar to those of any other private registry. There are two issues to be aware of:
When your Harbor instance is hosting HTTP and the certificate is self-signed, you must modify daemon.json on each work node of your cluster. For information, see https://docs.docker.com/registry/insecure/#deploy-a-plain-http-registry.
If your pod references an image under a private project, you must create a secret with the credentials of a user who has permission to pull images from the project. For information, see https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/.
I created the daemon.json file in /etc/docker:
{
  "insecure-registries" : ["my-harbor-server:443"]
}
As mentioned in the Kubernetes documentation, I created the Secret using this command:
kubectl create secret generic regcred \
--from-file=.dockerconfigjson=<path/to/.docker/config.json> \
--type=kubernetes.io/dockerconfigjson
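To confirm the secret decodes back to the expected docker config, it can be inspected like this (a sketch following the pattern in the k8s docs):
kubectl get secret regcred -o jsonpath='{.data.\.dockerconfigjson}' | base64 --decode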
Then I used a file called pod.yml to create the pod (using kubectl apply -f pod.yml):
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
  - name: myapp
    image: my-harbor-server/my-project/mayapp:v1.0
  imagePullSecrets:
  - name: regcred
However, when I checked the pod status, it showed 'ImagePullBackOff'. The pod logs show:
Error from server (BadRequest): container "myapp" in pod "myapp" is waiting to start: trying and failing to pull image
Is there any other step I have to do to pull this image from the Harbor private repository into Kubernetes? What is the reason I cannot pull it?
The /etc/docker/daemon.json file configures the Docker engine. If your CRI is not the Docker shim, then this file will not apply to Kubernetes. For k3s, registries are configured using /etc/rancher/k3s/registries.yaml. See https://rancher.com/docs/k3s/latest/en/installation/private-registry/ for details on configuring this file. It needs to be done on each host.
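A minimal sketch of that file, assuming the Harbor hostname from the question, a self-signed certificate, and placeholder credentials:
# /etc/rancher/k3s/registries.yaml -- must exist on every node; restart k3s after editing
mirrors:
  my-harbor-server:
    endpoint:
      - "https://my-harbor-server:443"
configs:
  my-harbor-server:
    auth:
      username: <harbor-user>       # a user with pull permission on the project
      password: <harbor-password>
    tls:
      insecure_skip_verify: true    # because the certificate is self-signed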
I am dealing with some issues on Kubernetes on Azure (AKS) using the cluster autoscaler and secrets for pulling images from Docker Hub.
I created the secret in my application's namespace while having 3 nodes enabled (the initial cluster state).
kubectl create secret docker-registry mysecret --docker-server=https://index.docker.io/v1/ --docker-username=<docker_id> --docker-password=<docker_password> -n mynamespace
I deploy my application using the imagePullSecrets option after specifying the image URL:
imagePullSecrets:
- name: mysecret
After deploying the application I created the autoscaler rule.
kubectl autoscale deployment mydeployment --cpu-percent=50 --min=1 --max=20 -n mynamespace
All new pods pull the image correctly. However, at some point when a new Kubernetes node is automatically deployed, all new pods requiring the Docker Hub based image cannot start:
Failed to pull image "mydocherhubaccount/myimage:mytag": rpc error: code = Unknown desc = Error response from daemon: pull access denied for mydocherhubaccount/myimage:mytag, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
Is there anything I am missing here? I waited for 15 minutes and recreated the pods, but it did not help.
I use Kubernetes 1.15.5 on Azure AKS. The cluster was created using the following command.
az aks create -g myresourcegroup -n mynamespace --location eastus --kubernetes-version 1.15.5 --node-count 3 --node-osdisk-size 100 --node-vm-size Standard_D4_v3 --enable-vmss --enable-cluster-autoscaler --min-count 3 --max-count 5
I appreciate any help provided. I'm really stuck here.
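Two checks that could help narrow this down (a sketch; the names match the commands above): confirm the secret exists in the deployment's namespace, and that the pod template actually references it.
kubectl get secret mysecret -n mynamespace
kubectl get deployment mydeployment -n mynamespace \
    -o jsonpath='{.spec.template.spec.imagePullSecrets}'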
Environment of kubectl: Windows 10.
Kubectl version: https://storage.googleapis.com/kubernetes-release/release/v1.15.0/bin/windows/amd64/kubectl.exe
Hello. I've just installed a Kubernetes cluster on Google Cloud Platform, then ran the following command:
gcloud container clusters get-credentials my-cluster --zone europe-west1-b --project my-project
It successfully added the credentials to %UserProfile%\.kube\config.
But when I try kubectl get pods, it returns Unable to connect to the server: EOF. My computer accesses the internet through a corporate proxy. How and where can I provide a cert file for kubectl so it uses the cert with all requests? Thanks.
You would get EOF if there is no response to kubectl's API calls within a certain time (the idle timeout is 300 seconds by default).
Try increasing the cluster idle timeout, or you may need a VPN to reach the cluster's API server.
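Since the question mentions a corporate proxy, it may also be worth pointing kubectl at the proxy explicitly (a sketch for the Windows command prompt; the proxy address is hypothetical):
set HTTPS_PROXY=http://proxy.mycorp.example:3128
kubectl get pods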