Retrieving untagged GCR images with the gcloud CLI

I know you can do the following to retrieve a list of tags of a specific image:
gcloud container images list --repository=gcr.io/myproject
But I was wondering whether I can also use the gcloud CLI to retrieve the images that have no tag.
The untagged images are shown in the Google Cloud Console web interface.
Solution
gcloud container images list-tags gcr.io/myproject/repo --filter='-tags:*'

list-tags would be better for your needs. Specifically, if you want to see information on all images (including untagged ones):
gcloud container images list-tags gcr.io/project-id/repository --format=json
And if you want to print the digests of images which are untagged:
gcloud container images list-tags gcr.io/project-id/repository --filter='-tags:*' --format='get(digest)' --limit=$BIG_NUMBER
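A common follow-up is garbage-collecting those untagged images. The helper below is a minimal sketch built from the same `list-tags` filter (the repository path is a placeholder); deletion is irreversible, so echo the digest list first before running it for real.

```shell
# Sketch: delete every untagged image in a GCR repository.
# The repository path passed in is a placeholder for your own repo.
delete_untagged() {
  local repo="$1"
  gcloud container images list-tags "$repo" \
      --filter='-tags:*' --format='get(digest)' \
  | while read -r digest; do
      # Each untagged manifest is addressed as repo@sha256:...
      gcloud container images delete "${repo}@${digest}" --quiet
    done
}

# delete_untagged gcr.io/myproject/repo
```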

I think what you are looking for is the list-tags sub-command. Note that it takes the image path as a positional argument, not a --repository flag:
gcloud container images list-tags gcr.io/myproject/myrepo

Related

What is the difference between `minikube cache list` and `minikube image list`

These two commands appear to list the images which are available in minikube. Obviously one refers to images that are 'in cache' and the other does not; however, it is not clear where the images listed by minikube image list reside if not in the cache, or how images could appear in the output of minikube image list but not in that of minikube cache list.
The difference is that minikube image list shows all images in the cluster, while minikube cache list shows only the images you added manually. You can read more in the minikube docs on offline usage and on pushing images.
minikube start caches all required Kubernetes images by default. This default may be changed by setting --cache-images=false. These images are not displayed by the minikube cache command.
To add Docker image to Minikube you can use:
minikube image load <name-of-docker-image>
Here is an example:
user@minikube:~/myproject$ minikube cache list
user@minikube:~/myproject$ minikube image load helloapp:v1
user@minikube:~/myproject$ minikube cache list
helloapp:v1
user@minikube:~/myproject$

How to use local docker images in kubernetes deployments (NOT minikube)

I have a VM with Kubernetes installed using kubeadm (NOT minikube). The VM acts as the single node of the cluster, with taints removed to allow it to act as both master and worker node (as shown in the Kubernetes documentation).
I have saved, transferred, and loaded my app:test image onto it. I can easily run a container with it using docker run.
It shows up when I run sudo docker images.
When I create a deployment/pod that uses this image and specify imagePullPolicy: IfNotPresent or Never, I still get an ImagePullBackOff error. The describe command shows me it tries to pull the image from Docker Hub...
Note that when I try to use a local image that was pulled as the result of creating another pod, the imagePullPolicy values seem to work fine, although that image doesn't appear when I run sudo docker images --all.
How can I use a local image for pods in kubernetes? Is there a way to do it without using a private repository?
"the image doesn't appear when I run sudo docker images --all"
Based on your comment, you are using K8s v1.22, which means it is likely your cluster is using containerd container runtime instead of docker (you can check with kubectl get nodes -o wide, and see the last column).
Try listing your images with crictl images and pulling with crictl pull <image_name> to preload the images on the node.
One can do so with a combination of crictl and ctr, if using containerd.
TLDR: these steps, which are also described in the crictl github documentation:
1- Once you get the image on the node (in my case, a VM), make sure it is in an archive (.tar). You can do that with the docker save or ctr image export commands.
2- Run sudo ctr -n=k8s.io images import myimage.tar from the same directory as the archived image to add it to containerd in the namespace that Kubernetes uses to track its images. It should now appear when you run sudo crictl images.
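The two steps above can be sketched as one helper (the image name and tarball path are placeholders; it assumes docker and containerd's ctr are on the node, run as root or with sudo):

```shell
# Sketch: archive a local Docker image and import it into containerd's
# k8s.io namespace so the kubelet can find it.
preload_image() {
  local image="$1" tarball="${2:-/tmp/preload.tar}"
  docker save "$image" -o "$tarball" || return 1  # step 1: archive the image
  ctr -n=k8s.io images import "$tarball"          # step 2: import into k8s.io
}

# preload_image app:test
# crictl images   # the image should now be listed
```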
As suggested, I tried listing images with crictl and my app:test did not appear. However, trying to import my local image through crictl didn't seem to work either. I used crictl pull app:test and it showed the following error message:
FATA[0000] pulling image failed: rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/app:test": failed to resolve reference "docker.io/library/app:test": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed.
However, when following these steps, my image is finally recognized as an existing local image in Kubernetes. They are actually the same steps as suggested in the crictl GitHub documentation.
How does one explain this? How do images get "registered" in the kubernetes cluster? Why couldn't crictl import the image? I might post another issue to ask that...
Your cluster is bottled up inside your VM, so what you call local will always be remote for the cluster in that VM. And the reason Kubernetes is trying to pull those images is that it can't find them in the VM.
Docker Hub is the default place to download containers from, but you can set Kubernetes to pull from AWS (ECR), from Azure (ACR), from GitHub Packages (GHCR), and from your own private server.
You've got many ways to solve this; none of them is trivial or will just work.
1 - Easiest: push your images to Docker Hub and let your cluster pull from there.
2 - Set up a local private container registry and configure your Kubernetes VM to pull from it (see this)
3 - Set up a private container registry inside your Kubernetes cluster and set up scripts in your local env to push to it (see this)
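Whichever option you choose, the pod spec also has to tell the kubelet not to pull. A minimal deployment sketch with placeholder names:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app                        # placeholder name
spec:
  replicas: 1
  selector:
    matchLabels: {app: app}
  template:
    metadata:
      labels: {app: app}
    spec:
      containers:
      - name: app
        image: app:test            # the locally loaded image
        imagePullPolicy: Never     # fail instead of pulling if the image is absent
```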

List only private Dataproc custom images

I have created a custom dataproc image using this command ...
$ python generate_custom_image.py --image-name my-ubuntu18-custom --dataproc-version 1.5-ubuntu18 --customization-script my-customization-script.sh --zone us-central-1 --gcs-bucket gs://dataproc-xxxxxx-imgs
After creation, I tried to list all the custom dataproc images created by me and was surprised to see 83 images. Mine was showing up alongside 82 other images. I expected to see only mine. How to ensure mine is not in the public list of dataproc images?
gcloud will by default list private images in your default project alongside a standard set of "public image projects", as listed in the gcloud compute images list help page; you can restrict the output to only your project-level private images with the --no-standard-images flag:
gcloud compute images list --no-standard-images
Another way to see the difference is if you have two GCP projects, and you gcloud config set project my-other-project and then try a default gcloud compute images list again, you shouldn't expect to see the custom image you created.
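If a script just needs a yes/no answer, the listing above can be wrapped in a check (a sketch; the image name is a placeholder and the match is an exact one):

```shell
# Returns success iff the named image exists among the project's private images.
is_project_image() {
  gcloud compute images list --no-standard-images --format='value(name)' \
  | grep -qx "$1"
}

# is_project_image my-ubuntu18-custom && echo "private image found"
```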
Finally, you can also use:
gcloud compute images describe my-ubuntu18-custom
to see the full resource name of the image along with other metadata, showing that it is nested in your project, and also gcloud compute images get-iam-policy:
gcloud compute images get-iam-policy my-ubuntu18-custom
to assure yourself that the permissions of the custom image are not public.

How do I update a service in the cluster to use a new docker image

I have created a new docker image that I want to use to replace the current docker image. The application is on the kubernetes engine on google cloud platform.
I believe I am supposed to use the gcloud container clusters update command, although I struggle to see how it works and how I'm supposed to replace the old Docker image with the new one.
You may want to use kubectl in order to interact with your GKE cluster. Method of image update depends on how the Pod / Container was created.
For some example commands, see https://kubernetes.io/docs/reference/kubectl/cheatsheet/#updating-resources
For example, kubectl set image deployment/frontend www=image:v2 will perform a rolling update of the "www" container in the "frontend" deployment, setting its image to image:v2.
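A small wrapper sketch around that command, which also waits for the rollout to finish (the deployment/container names come from the cheat-sheet example, not your cluster):

```shell
# Sketch: update one container's image and wait for the rollout to complete.
roll_image() {
  local deploy="$1" container="$2" image="$3"
  kubectl set image "deployment/${deploy}" "${container}=${image}" || return 1
  kubectl rollout status "deployment/${deploy}"
}

# roll_image frontend www image:v2
# kubectl rollout undo deployment/frontend   # if the new image misbehaves
```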
Getting up and running on GKE: https://cloud.google.com/kubernetes-engine/docs/quickstart
You can use Container Registry[1] as a single place to manage Docker images.
Google Container Registry provides secure, private Docker repository storage on Google Cloud Platform. You can use gcloud to push[2] images to your registry, then you can pull images using an HTTP endpoint from any machine.
You can also use Docker Hub repositories[3], which allow you to share container images with your team, customers, or the Docker community at large.
[1] https://cloud.google.com/container-registry/
[2] https://cloud.google.com/container-registry/docs/pushing-and-pulling
[3] https://docs.docker.com/docker-hub/repos/

Unable to create Dataproc cluster using custom image

I am able to create a google dataproc cluster from the command line using a custom image:
gcloud beta dataproc clusters create cluster-name --image=custom-image-name
as specified in https://cloud.google.com/dataproc/docs/guides/dataproc-images, but I am unable to find information about how to do the same using the v1beta2 REST api in order to create a cluster from within airflow. Any help would be greatly appreciated.
Since custom images can theoretically reside in a different project if you grant read/use access of that custom image to whatever project service account you use for the Dataproc cluster, images currently always need a full URI, not just a short name.
When you use gcloud, there's syntactic sugar where gcloud will resolve the full URI automatically; you can see this in action if you use --log-http with your gcloud command:
gcloud beta dataproc clusters create foo --image=custom-image-name --log-http
If you created a cluster with gcloud, you can also run gcloud dataproc clusters describe on it to see the fully-resolved custom image URI.
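So one way to get the exact URI string for a REST/Airflow request is to read it back from a cluster that gcloud already created. A sketch; the config.masterConfig.imageUri field path is an assumption about the Dataproc resource layout, so check the full describe output if yours differs:

```shell
# Sketch: read the fully-resolved custom image URI back from an existing cluster.
# The field path below is assumed; inspect `describe` output to confirm it.
get_image_uri() {
  gcloud dataproc clusters describe "$1" \
      --format='value(config.masterConfig.imageUri)'
}

# get_image_uri foo
```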