Kubernetes: deploying a template with multiple pods using images from DockerHub when DockerHub access is blocked

I'm deploying a system which uses many images from DockerHub and other public registries (e.g. registry.redhat.io). Our k8s cluster doesn't have access to Docker Hub or the others. We use AKS & ACR.
The template has multiple references like this
- name: some-api
  image: some-vendor/some-api:2.3.2
If I import the image some-vendor/some-api:2.3.2 from DockerHub into ACR and change the template to use the full image name, like my-acr-registry.azurecr.io/some-vendor/some-api:2.3.2, then it pulls the image fine, but I'd like to avoid touching the template. I'd rather make a few changes in a namespace or in the cluster so that it will look for the images in ACR.
Is there a way to make AKS search for that image in our ACR without specifying the full image name and without touching the template?
Another example: the template also uses images from other registries, e.g. registry.redhat.io/ubi8/nodejs-14. Is it possible to make it resolve from our ACR if I push the image under the name my-acr-registry.azurecr.io/registry.redhat.io/ubi8/nodejs-14, without updating the template?
Ideally I'd like to add some sort of image resolver which first attempts to pull concat("my-acr-registry.azurecr.io/docker.io/", imageName), and if the image isn't found, falls back to concat("my-acr-registry.azurecr.io/", imageName), to cater for both cases. Can I do that?
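One way to get close to this without editing the template is a mutating admission webhook that rewrites image references as pods are admitted. Below is a minimal sketch, assuming Kyverno is installed in the cluster; the policy name and the blind-prefix approach are illustrative, covering the concat("my-acr-registry.azurecr.io/", imageName) case. Note that admission-time rewriting happens before any pull, so it cannot probe ACR and fall back; the two-step resolver isn't directly expressible this way.

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: redirect-images-to-acr
spec:
  background: false
  rules:
    - name: prefix-with-acr
      match:
        any:
          - resources:
              kinds:
                - Pod
      mutate:
        foreach:
          - list: "request.object.spec.containers"
            patchStrategicMerge:
              spec:
                containers:
                  - name: "{{ element.name }}"
                    # blindly prefix every container image with the ACR registry;
                    # a real policy would skip images that already point at ACR
                    image: "my-acr-registry.azurecr.io/{{ element.image }}"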

Related

Configure knative serving to use the local Docker image

How do we configure Knative Serving to use a local Docker image?
If we need to pull an image from the Google Cloud registry, we append "gcr.io/" as a prefix to the image name. Similarly, is there any prefix to be used for a local image?
I'm facing the below error when I use hello:latest as the image name and Never as the imagePullPolicy:
Revision "hello1-ld62x" failed with message: Unable to fetch image "hello:latest"
But the same image is working fine if I use the Kubernetes Deployment resource.
Try adding docker.io to registries-skipping-tag-resolving. How to add it is covered here: https://knative.dev/docs/serving/configuration/deployment/
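For reference, that setting lives in the config-deployment ConfigMap in the knative-serving namespace. A minimal sketch of adding docker.io (the other entries shown are Knative's defaults for local registries):

apiVersion: v1
kind: ConfigMap
metadata:
  name: config-deployment
  namespace: knative-serving
data:
  # registries whose tags Knative Serving will not try to resolve to digests
  registries-skipping-tag-resolving: "kind.local,ko.local,dev.local,docker.io"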

Kubernetes - how to specify image pull secrets, when creating deployment with image from remote private repo

I need to create and deploy, on an existing Kubernetes cluster, an application based on a Docker image which is hosted in a private Harbor repo on a remote server.
I could use this if the repo was public:
kubectl create deployment <deployment_name> --image=<full_path_to_remote_repo>:<tag>
Since the repo is private, the username, password etc. are required for it to be pulled. How do I modify the above command to embed that information?
Thanks in advance.
P.S.
I'm looking for a way that doesn't involve creating a secret using kubectl create secret and then creating a yaml defining the deployment.
The goal is to have kubectl pull the image using the supplied creds and deploy it on the cluster without any other steps. Could this be achieved with a single (above) command?
Edit:
Creating and using a secret is acceptable if there was a way to specify the secret as an option in kubectl command rather than specify it in a yaml (really trying to avoid yaml). Is there a way of doing that?
There are no flags to pass an imagePullSecret to kubectl create deployment, unfortunately.
If you're coming from the world of Docker Compose or Swarm, one-line deployments are fairly common. But even those deployment tools use underlying configuration and .yml files, like docker-compose.yml.
For Kubernetes, there is official documentation on pulling images from private registries, and there is even special handling for docker registries. Check out the article on creating Docker config secrets too.
According to the docs, you must define a secret in this way to make it available to your cluster. Because Kubernetes is built for resiliency/scalability, any machine in your cluster may have to pull your private image, and therefore each machine needs access to your secret. That's why it's treated as its own entity, with its own manifest and YAML file.
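That said, if the goal is simply to avoid hand-writing YAML, both the secret and its wiring can be done from the command line. A sketch, with hypothetical registry and credential values:

# create the registry secret (no YAML file needed)
kubectl create secret docker-registry harbor-creds \
  --docker-server=harbor.example.com \
  --docker-username=myuser \
  --docker-password=mypassword

# attach it to the namespace's default service account so pulls use it automatically
kubectl patch serviceaccount default \
  -p '{"imagePullSecrets": [{"name": "harbor-creds"}]}'

# now the one-liner works as if the repo were public
kubectl create deployment my-app --image=harbor.example.com/project/image:tag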

Best practices for storing images locally to avoid problems when the image source is down (security too)?

I'm using argocd and helm charts to deploy multiple applications in a cluster. My cluster happens to be on bare metal, but I don't think that matters for this question. Also, sorry, this is probably a pretty basic question.
I ran into a problem yesterday where one of the remote image sources used by one of my helm charts was down. This brought me to a halt because I couldn't stand up one of the main services for my cluster without that image and I didn't have a local copy of it.
So, my question is, what would you consider to be best practice for storing images locally to avoid this kind of problem? Can I store charts and images locally once I've pulled them for the first time so that I don't have to always rely on third parties? Is there a way to set up a pass-through cache for helm charts and docker images?
If your scheduled pods were unable to start on a specific node with a Failed to pull image "your.docker.repo/image" error, you should consider having these images already downloaded on the nodes.
Think about how you can docker pull the images onto your nodes. It may be a Linux cron job, a Kubernetes operator, or any other solution that ensures the presence of the Docker image on the node even if you have connectivity issues; one common pattern is a pre-puller DaemonSet, sketched below.
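A minimal sketch of that DaemonSet pattern (names are placeholders, and the init container assumes the image ships a shell): the init container pulls the target image by running it as a no-op, and a tiny pause container keeps the pod alive so the image stays in use on every node.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: prepull-some-api
spec:
  selector:
    matchLabels:
      app: prepull-some-api
  template:
    metadata:
      labels:
        app: prepull-some-api
    spec:
      initContainers:
        - name: prepull
          image: myregistrydomain.com:5000/some-vendor/some-api:2.3.2  # image to keep cached
          command: ["sh", "-c", "exit 0"]  # exit immediately; the pull itself is the point
      containers:
        - name: pause
          image: registry.k8s.io/pause:3.9  # tiny no-op container keeps the pod alive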
As one of the options:
Create your own Helm chart repository to store Helm charts locally (optional)
Create a local image registry and push the needed images there, tagging them accordingly for future simplicity
On each node, add the insecure registry by editing /etc/docker/daemon.json and adding
{
  "insecure-registries" : ["myregistrydomain.com:5000"]
}
Restart the Docker service on each node to apply the changes
Change your Helm chart templates to set the proper image path from the local repo
Recreate the chart with the new properties and (optionally) push it to the local Helm repo created in step 1
Finally, install the chart; this time it should pick up the images from the local repo.
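For the pass-through cache part of the question: the open-source registry image (registry:2) can act as a pull-through cache for Docker Hub by setting proxy.remoteurl in its config. A minimal sketch (port and storage path are assumptions):

# config.yml for registry:2 running as a pull-through cache of Docker Hub
version: 0.1
storage:
  filesystem:
    rootdirectory: /var/lib/registry
http:
  addr: :5000
proxy:
  remoteurl: https://registry-1.docker.io

To have nodes use it, add it under registry-mirrors in /etc/docker/daemon.json; note that the daemon's mirror setting only applies to Docker Hub, so images from other registries still need the retag-and-push approach above.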
You may also be interested in Kubernetes-Helm Charts pointing to a local docker image

How to update kubernetes deployment without updating image

Background.
We are using k8s 1.7. We use deployment.yml to maintain/update the k8s cluster state. In deployment.yml, the pod's image is set to ${some_image}:latest. Once the deployment is created, the pod's image is updated to ${some_image}:${build_num} whenever code is merged into master.
What happens now is, let's say we need to modify the resource limits in deployment.yml and re-apply it. The image of the deployment will be updated to ${some_image}:latest as well. We want to keep the image as it is in the cluster state, without maintaining the actual tag in deployment.yml. We know that replicas can be omitted in the file, and it takes the value from the cluster state by default.
Questions:
On 1.7, the spec.template.spec.containers[0].image is required.
Is it possible to apply deployment.yml without updating the image to ${some_image}:latest as well (an argument like --ignore-image-change, or a specific field in deployment.yml)? If so, how?
Also, I see the image is optional in the 1.10 documentation.
Is that true? If so, since which version?
--- Updates ---
CI builds and deploys a new image on every merge into master. At deploy time, CI runs the command kubectl set image deployment/app container=${some_image}:${build_num}, where ${build_num} is the build number of the pipeline.
To apply deployment.yml, we run kubectl apply -f deployment.yml
However, in the deployment.yml file we specify the latest tag of the image, because it is impossible to keep this field up to date.
Using the :latest tag is against best practices in Kubernetes deployments for a number of reasons, rollback and versioning being some of them. To properly resolve this you should maybe rethink your CI/CD pipeline approach. We use the CI pipeline or CI job version to tag images, for example.
Is it possible to update the deployment without updating the image to the one specified in the file? If so, how?
To update a pod without changing the image you have some options, each with constraints, and they all require some Ops gymnastics and introduce additional points of failure, since they go against the recommended approach.
k8s can pull the image from your remote registry (you must keep track of hashes, since your latest is out of your direct control; potential issues here). You can check the hash in use in the local Docker image store of the node the pod is running on.
k8s can pull the image from the node's local image store (you must ensure that :latest refers to the same image on all nodes that could run the pod; potential issues here). Once the image is there, you can play with the container's imagePullPolicy: when the CI tool deploys, it uses apply of the YAML (in contrast to create) with the pull policy set to Always, immediately followed by an apply that sets the pull policy to Never (also a potential issue here), restricting pulls to the image already present locally (as mentioned, potential issues here as well).
Here is an excerpt from documentation about this approach: By default, the kubelet will try to pull each image from the specified registry. However, if the imagePullPolicy property of the container is set to IfNotPresent or Never, then a local image is used (preferentially or exclusively, respectively).
If you want to rely on pre-pulled images as a substitute for registry authentication, you must ensure all nodes in the cluster have the same pre-pulled images.
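As an illustration of what that excerpt describes (a sketch, not the asker's actual manifest):

# rely on an image already present on the node; never contact the registry
containers:
  - name: app
    image: some_image:latest
    imagePullPolicy: Never  # IfNotPresent would fall back to pulling only if missing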
More about how k8s handles images, and why latest tagging can bite back, is given here: https://kubernetes.io/docs/concepts/containers/images/
In case you don't want to deal with complex syntax in deployment.yaml in CI, you have the option to use a template processor, for example mustache. It would change the CI process a little bit:
update the image version in the template config (env1.yaml)
generate deployment.yaml from the template deployment.mustache and env1.yaml
$ mustache env1.yaml deployment.mustache > deployment.yaml
apply the configuration to the cluster
$ kubectl apply -f deployment.yaml
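A minimal sketch of the two files involved (file names and keys are illustrative):

# env1.yaml, regenerated by CI on every master build
image: some_image
build_num: "1042"

# deployment.mustache, excerpt; only the tag ever varies
    containers:
      - name: app
        image: {{image}}:{{build_num}}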
The main benefits:
env1.yaml always contains the latest master build image, so you are creating the deployment object using the correct image.
env1.yaml is easy to update or generate at the CI step.
deployment.mustache stays immutable, and you are sure that all that could possibly change in the final deployment.yaml is an image version.
There are many other template rendering solutions in case mustache doesn't fit well in your CI.
Like Const above, I highly recommend against using :latest in any Docker image; instead use CI/CD to solve the version problem.
We have the same issue on the Jenkins X project where we have many git repositories and as we change things like libraries or base docker images we need to change lots of versions in pom.xml, package.json, Dockerfiles, helm charts etc.
We use a simple CLI tool called UpdateBot which automates the generation of Pull Requests on all downstream repositories. We tend to think of this as Continuous Delivery for libraries and base images ;). E.g. here are the current Pull Requests that UpdateBot has generated on the Jenkins X organisation repositories.
Then here's how we update Dockerfiles / helm charts as we release, say, new base images:
https://github.com/jenkins-x/builder-base/blob/master/jx/scripts/release.sh#L28-L29
Are you aware of the repo.example.com/some-image@sha256:... syntax for pulling images from a Docker registry by digest? It is almost exactly designed to solve the problem you are describing.
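For example, in a container spec (the digest value below is a made-up placeholder; a real one comes from docker images --digests or the registry):

# pin the image by digest; this reference never moves, unlike :latest
image: repo.example.com/some-image@sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef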
updated from a comment:
You're solving the wrong problem; the file is only used to load content into the cluster. From that moment forward, the authoritative copy of the metadata is in the cluster. The kubectl patch command can be a surgical way of changing some content without resorting to sed (or worse), but one should not try to maintain cluster state outside the cluster.
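For instance, to change just the resource limits the question mentions, without touching the image (names and values are illustrative):

$ kubectl patch deployment app --type strategic -p \
  '{"spec":{"template":{"spec":{"containers":[{"name":"app","resources":{"limits":{"cpu":"500m","memory":"512Mi"}}}]}}}}'

The strategic merge patch matches the container by name, so every other field, including the image, is left exactly as it is in the cluster.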

docker has image, but kubernetes says not

I was trying to build with Kubernetes, and I wanted to use a local image which I had pulled earlier to save time (so it does not have to pull an image every time a container is created). The problem is that Kubernetes just ignores my local images.
For example. when I run docker images, I got this in the list
gcr.io/google_containers/k8s-dns-kube-dns-amd64 1.14.1 f8363dbf447b 7 months ago 52.36 MB
But when I tried to build a deployment with the config (just a part of the config file):
image: gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.1
imagePullPolicy: Never
It said Container image "gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.1" is not present with pull policy of Never
I wonder if there is anything I missed. Thanks for help!
If you are using a Kubernetes cluster, you have to make sure the Docker image you need is available on all nodes, as you don't really know which one the Pod will be assigned to.
So: pull/build the needed image on each cluster node.
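A sketch of doing that over SSH, assuming hypothetical node names and Docker as the container runtime:

$ for node in node1 node2 node3; do ssh "$node" docker pull gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.1; done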
If you look closely, you'll notice that the name of the image it says is missing is not the same as the one you pulled yourself. It looks like you're attempting to deploy kube-dns, which is made up of multiple images. You'll need to pull each one of them (or allow Kubernetes to pull them on your behalf) for the entire thing to work. Take another look at the configuration files you're using to deploy; you should find other containers in the pod specifying the other images.