I am trying to connect to my MinIO bucket from an Argo workflow. However, my pod fails to initialise with the following error:
MountVolume.SetUp failed for volume "argo-artifacts" : secret "argo-artifacts" not found
But if I run:
kubectl get secrets
It does list argo-artifacts
How can I make my workflow run successfully and connect to the bucket?
(This is what I am trying to get working: https://argoproj.github.io/argo-workflows/walk-through/artifacts/)
It was because the secret was in a different namespace. Creating the secret in the same namespace as the pods resolved the issue.
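For reference, a minimal sketch of recreating the secret in the workflow's namespace (the namespace name argo and the accesskey/secretkey key names are assumptions; use whatever your MinIO setup expects):
kubectl create secret generic argo-artifacts \
  --namespace argo \
  --from-literal=accesskey=<minio-access-key> \
  --from-literal=secretkey=<minio-secret-key>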
Related
In my container repository, the permissions are set to Private.
When I create a pod on my cluster, the pod status ends up in ImagePullBackOff, and when I describe the pod I see:
Failed to pull image "gcr.io/REDACTED": rpc error: code = Unknown desc = Error response from daemon: pull access denied for gcr.io/REDACTED, repository does not exist or may require 'docker login': denied: Permission denied for "v11" from request "/v2/REDACTED/manifests/v11".
I am certainly logged in.
docker login
Authenticating with existing credentials...
Login Succeeded
Now, if I enable public access on my Container Repository, things work fine and the pod deploys correctly. But I don't want my repository to be public. What is the correct way to keep my container repository private and still be able to deploy? I'm pretty sure this used to work a couple of weeks ago, unless I messed up something with my service account, although I don't know how to find out which service account is being used for these permissions.
If your GKE version is > 1.15, the Container Registry is in the same project, and GKE uses the default Compute Engine service account (SA), it should work out of the box.
If you are running the registry in another project, or using a different service account, you should grant the SA the right permissions (e.g., roles/artifactregistry.reader).
A step-by-step tutorial covering all the different cases is available in the official documentation: https://cloud.google.com/artifact-registry/docs/access-control#gcp
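If you need to grant that role yourself, a minimal sketch (the project ID and service-account email are placeholders, not values taken from the question):
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member="serviceAccount:NODE_SA_EMAIL" \
  --role="roles/artifactregistry.reader"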
To use gcr.io or any other private artifact registry, you'll need to create a Secret of type docker-registry in the k8s cluster. The secret will contain the credential details of your registry:
kubectl create secret docker-registry <secret-name> \
--docker-server=<server-name> \
--docker-username=<user-name> \
--docker-password=<user-password> \
--docker-email=<user-email-id>
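For gcr.io specifically, a hedged example using a service-account JSON key (the key path is an assumption; _json_key is the documented username for key-based authentication, and the secret is named to match the manifest below):
kubectl create secret docker-registry myregistrykey \
  --docker-server=gcr.io \
  --docker-username=_json_key \
  --docker-password="$(cat ./gcr-sa-key.json)" \
  --docker-email=<user-email-id>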
After this, you will need to specify the above secret in the imagePullSecrets property of your manifest so that k8s is able to authenticate and pull the image.
apiVersion: v1
kind: Pod
metadata:
  name: pod1
  namespace: default
spec:
  containers:
  - name: pod1
    image: gcr.io/pod1:latest
  imagePullSecrets:
  - name: myregistrykey  # must match the name of the docker-registry secret
Check out this tutorial from container-solutions and the official k8s docs.
GKE uses the service account attached to the node pools to grant access to the registry; however, you must also make sure the OAuth scope for your cluster includes https://www.googleapis.com/auth/devstorage.read_only.
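A quick way to check the scopes on an existing node pool (the pool, cluster, and zone names here are assumptions):
gcloud container node-pools describe default-pool \
  --cluster my-cluster --zone us-central1-a \
  --format="value(config.oauthScopes)"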
I'm trying to expose the pod that I've already created as a service, but I keep getting the aforementioned error.
The first error is because I had already deployed the pods the other day. But the second error is the main problem.
It would be great if anyone could help me out.
kubectl run ...
is used to create and run a particular image in a pod. [reference]
kubectl expose ...
is used to expose a resource (pod, service, replicationcontroller, deployment, replicaset) as a new k8s service. [reference]
What you are doing is creating a pod with kubectl run and exposing a deployment with kubectl expose deployment. Those are two different resources. That's why you are getting the NotFound error: the specified deployment does not exist.
What you can do is either
kubectl expose pod ...
or create a deployment.
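For example, a minimal sketch of both options (the pod name, image, and ports are assumptions):
kubectl expose pod mypod --port=80 --target-port=8080 --name=mypod-svc
or
kubectl create deployment myapp --image=myimage:latest
kubectl expose deployment myapp --port=80 --target-port=8080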
I've set up a private container registry that is successfully integrated with Bitbucket. However, I am not able to pull images from my GKE cluster.
I created a service account with the role "Project Viewer" and a JSON key for this account. Then I created the secret in the cluster/namespace by running:
kubectl create secret docker-registry gcr-json-key \
--docker-server=gcr.io \
--docker-username=_json_key \
--docker-password="$(cat ~/code/bitbucket/miappsrl/miappnodeapi/secrets/registry/miapp-staging-e94050365be1.json)" \
--docker-email=agusmiappgcp@gmail.com
And in the deployment file I added
...
imagePullSecrets:
- name: gcr-json-key
...
But when I apply the deployment I get
ImagePullBackOff
And when I do a kubectl describe pod <pod_name> I see
Failed to pull image "gcr.io/miapp-staging/miappnodeapi": rpc error: code = Unknown desc = Error response from daemon: Get https://gcr.io/v2/: dial tcp: lookup gcr.io on 169.254.169.254:53: no such host
I can't figure out what I am missing. I understand it can't resolve the DNS name inside the cluster, but I'm not sure what I should add.
If a GKE cluster is set up as private, you need to set up DNS to reach Container Registry. From the documentation:
To support GKE private clusters that use Container Registry or Artifact Registry inside a service perimeter, you first need to configure your DNS server so requests to registry addresses resolve to restricted.googleapis.com, the restricted VIP. You can do so using Cloud DNS private DNS zones.
Verify whether you set up your cluster as private.
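As a sketch of what the documentation describes (the zone name gcr-io and the VPC network name my-vpc are assumptions; 199.36.153.4/30 is the documented restricted.googleapis.com VIP range):
gcloud dns managed-zones create gcr-io \
  --dns-name=gcr.io. \
  --visibility=private \
  --networks=my-vpc \
  --description="Resolve gcr.io via the restricted VIP"
gcloud dns record-sets transaction start --zone=gcr-io
gcloud dns record-sets transaction add --zone=gcr-io --name=gcr.io. \
  --type=A --ttl=300 199.36.153.4 199.36.153.5 199.36.153.6 199.36.153.7
gcloud dns record-sets transaction add --zone=gcr-io --name="*.gcr.io." \
  --type=CNAME --ttl=300 gcr.io.
gcloud dns record-sets transaction execute --zone=gcr-io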
I am dealing with some issues on Kubernetes on Azure (AKS) using Autoscaler and secrets for pulling images from Docker Hub.
I created the secret in my applications namespace while having 3 nodes enabled (initial cluster status).
kubectl create secret docker-registry mysecret --docker-server=https://index.docker.io/v1/ --docker-username=<docker_id> --docker-password=<docker_password> -n mynamespace
I deploy my application using the imagePullSecrets option after specifying the image URL.
imagePullSecrets:
- name: mysecret
After deploying the application I created the autoscaler rule.
kubectl autoscale deployment mydeployment --cpu-percent=50 --min=1 --max=20 -n mynamespace
All new pods pull the image correctly. However, at some point, when a new Kubernetes node is automatically deployed, all new pods requiring the Docker Hub based image cannot start.
Failed to pull image "mydocherhubaccount/myimage:mytag": rpc error: code = Unknown desc = Error response from daemon: pull access denied for mydocherhubaccount/myimage:mytag, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
Is there anything I am missing here? I waited for 15 minutes and recreated pods but it did not help.
I use Kubernetes 1.15.5 on Azure AKS. The cluster was created using the following command.
az aks create -g myresourcegroup -n mynamespace --location eastus --kubernetes-version 1.15.5 --node-count 3 --node-osdisk-size 100 --node-vm-size Standard_D4_v3 --enable-vmss --enable-cluster-autoscaler --min-count 3 --max-count 5
I appreciate any help. I'm really stuck on this one.
I've used the following command to update the image used by a deployment:
kubectl --cluster websites --namespace production set image \
  deployment/mobile-web mobile-web=eu.gcr.io/websites/mobile-web:0.23
This worked well until I created a staging namespace mirroring the production environment. In other words the deployment mobile-web exists both in the production and staging namespace. Now I get the error:
Error from server: the server could not find the requested resource
(get deployments.extensions mobile-web)
What am I missing here? Or is the only way to update via a YAML or JSON file, which means a bit more work in the CI/CD pipeline? I've tried setting the namespace with:
kubectl config set-context production --namespace=production --cluster=websites
but to no avail.
The solution in my case was to kill the current proxy, get new credentials, and start the proxy again:
gcloud container clusters get-credentials websites
kubectl proxy --port=8080
Now both commands work as expected:
kubectl get deployment mobile-web --namespace=production
kubectl get deployment mobile-web --namespace=staging
However, it doesn't explain why it stopped working in the first place.