Unable to pull the image to run a pod in Kubernetes

[Screenshot of the pod detail: "Failed to pull image"]
The image has been pushed to the Azure registry through Docker for Windows.
Image name provided: the same name that was used when tagging the image in Docker.

You have currently provided very little detail.
Is your Kubernetes cluster correctly configured to pull images from the Azure registry? As far as I can see, it isn't. Is this a managed AKS cluster? If not, it won't be able to access your private Azure registry by default and will need to be configured with the credentials required to access it.
http://docs.heptio.com/content/private-registries/pr-docker-hub.html
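If credentials are needed, here is a minimal sketch of creating them as a Kubernetes secret; the secret name, registry name and service principal placeholders are assumptions, not taken from the question:
kubectl create secret docker-registry acr-secret \
--docker-server=<your-registry>.azurecr.io \
--docker-username=<service-principal-id> \
--docker-password=<service-principal-password> \
--docker-email=<any-valid-email>
The secret would then be referenced under imagePullSecrets in the pod spec.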
Another possibility is you're pushing a Windows-based container onto linux based worker nodes which can only run linux based containers.

I only have experience with GKE but if you want to pull docker images from a repository that is not in the same project as the GKE cluster, you have to provide credentials for the image to be pulled.
I do this with a Secret in Kubernetes that contains a .dockerconfigjson entry:
apiVersion: v1
kind: Secret
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: <REDACTED>
In order to create a secret of this type I've used a template like this:
kubectl create secret docker-registry <SECRET_NAME> \
--docker-server=https://gcr.io \
--docker-username=_json_key \
--docker-email=<SVC_ACCOUNT_EMAIL> \
--docker-password=<CONTENTS_OF_SVC_ACCOUNT_CREDS_FILE>
Once that's created you will need to reference the secret in the relevant pods / deployment. In the pod spec you will need:
imagePullSecrets:
- name: <SECRET_NAME>
(it's a list because you can reference many secrets to pull from other places)
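For context, a minimal pod spec sketch showing where that list sits; the pod name, container name and image path below are placeholders, not from the original answer:
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  imagePullSecrets:
  - name: <SECRET_NAME>
  containers:
  - name: app
    image: gcr.io/<PROJECT_ID>/<IMAGE>:<TAG>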
I imagine that Azure has a similar setup whereby any images in the same project as the cluster can be pulled, but any images hosted in another Azure project or an external image repo will need credentials.
I use the setup described above to also pull Google Container Registry images to a local Minikube.
So I think we need to work out where your docker image is hosted and if credentials are needed to pull that image.

Related

Kubernetes - how to specify image pull secrets, when creating deployment with image from remote private repo

I need to create and deploy, on an existing Kubernetes cluster, an application based on a Docker image which is hosted on a private Harbor repo on a remote server.
I could use this if the repo was public:
kubectl create deployment <deployment_name> --image=<full_path_to_remote_repo>:<tag>
Since the repo is private, the username, password etc. are required for it to be pulled. How do I modify the above command to embed that information?
Thanks in advance.
P.S.
I'm looking for a way that doesn't involve creating a secret using kubectl create secret and then creating a yaml defining the deployment.
The goal is to have kubectl pull the image using the supplied creds and deploy it on the cluster without any other steps. Could this be achieved with a single (above) command?
Edit:
Creating and using a secret is acceptable if there is a way to specify the secret as an option to the kubectl command rather than specifying it in a YAML file (really trying to avoid YAML). Is there a way of doing that?
There are no flags to pass an imagePullSecret to kubectl create deployment, unfortunately.
If you're coming from the world of Docker Compose or Swarm, having one-line deployments is fairly common. But even these deployment tools rely on underlying configuration and .yml files, like docker-compose.yml.
For Kubernetes, there is official documentation on pulling images from private registries, and there is even special handling for docker registries. Check out the article on creating Docker config secrets too.
According to the docs, you must define a secret in this way to make it available to your cluster. Because Kubernetes is built for resiliency/scalability, any machine in your cluster may have to pull your private image, and therefore each machine needs access to your secret. That's why it's treated as its own entity, with its own manifest and YAML file.
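As a sketch of the documented approach (the secret name harbor-cred is made up here; the other placeholders come from the question): create the secret once on the command line, then reference it from the deployment's pod template:
kubectl create secret docker-registry harbor-cred \
--docker-server=<harbor_server_url> \
--docker-username=<username> \
--docker-password=<password>

apiVersion: apps/v1
kind: Deployment
metadata:
  name: <deployment_name>
spec:
  replicas: 1
  selector:
    matchLabels:
      app: <deployment_name>
  template:
    metadata:
      labels:
        app: <deployment_name>
    spec:
      imagePullSecrets:
      - name: harbor-cred
      containers:
      - name: app
        image: <full_path_to_remote_repo>:<tag>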

How to pull from private project's image registry using GitLab managed Kubernetes clusters

GitLab offers to manage a Kubernetes cluster, which includes (e.g.) creating the namespace, adding some tokens, etc. In GitLab CI jobs, one can directly use the $KUBECONFIG variable for contacting the cluster and e.g. creating deployments using helm. This works like a charm, as long as the GitLab project is public and therefore Docker images hosted by the GitLab project's image registry are publicly accessible.
However, when working with private projects, Kubernetes of course needs an ImagePullSecret to authenticate against GitLab's image registry and retrieve the image. As far as I can see, GitLab does not automatically provide an ImagePullSecret for repository access.
Therefore, my question is: What is the best way to access the image repository of private GitLab repositories in a Kubernetes deployment in a GitLab managed deployment environment?
In my opinion, these are the possibilities and why they are not eligible/optimal:
Permanent ImagePullSecret provided by GitLab: When doing a deployment on a GitLab managed Kubernetes cluster, GitLab provides a list of variables to the deployment script (e.g. Helm Chart or kubectl apply -f manifest.yml). As far as I can (not) see, there is a lot of stuff like ServiceAccounts and tokens etc., but no ImagePullSecret - and also no configuration option for enabling ImagePullSecret creation.
Using $CI_JOB_TOKEN: When working with GitLab CI/CD, GitLab provides a variable named $CI_JOB_TOKEN which can be used for uploading Docker images to the registry during job execution. This token expires after the job is done. It could be combined with helm install --wait, but when a rescheduling takes place to a new node which does not have the image yet, the token has expired and the node is no longer able to download the image. Therefore, this only works at the moment of deploying the app.
Creating an ImagePullSecret manually and adding it to the Deployment or the default ServiceAccount: This is a manual step, has to be repeated for each individual project, and just sucks; we're trying to automate things, and GitLab managed Kubernetes clusters are designed to avoid any manual step.
Something else but I don't know about it.
So, am I wrong in one of these points? Am I missing an eligible option in this listing?
Again: It's all about a seamless integration with the "Managed Cluster" features of GitLab. I know how to add tokens from GitLab as ImagePullSecrets in Kubernetes, but I want to know how to automate this with the Managed Cluster feature.
There is another way. You can bake the ImagePullSecret into your container runtime configuration: Docker, containerd or CRI-O (whatever you are using).
Docker
As root, run docker login <your-private-registry-url>. A file /root/.docker/config.json should then be created/updated. Put that file on all your Kubernetes nodes and make sure your kubelet runs as root (which it typically does). Some background info.
The content of the file should look something like this:
{
  "auths": {
    "my-private-registry": {
      "auth": "xxxxxx"
    }
  },
  "HttpHeaders": {
    "User-Agent": "Docker-Client/18.09.2 (Linux)"
  }
}
Containerd
Configure your containerd.toml file with something like this:
[plugins.cri.registry.auths]
  [plugins.cri.registry.auths."https://gcr.io"]
    username = ""
    password = ""
    auth = ""
    identitytoken = ""
CRI-O
Specify the global_auth_file option in your crio.conf file.
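As a sketch, assuming the usual crio.conf layout, the option lives in the [crio.image] section and points at a Docker-style auth file like the one shown above:
[crio.image]
global_auth_file = "/root/.docker/config.json"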
✌️
Configure your account.
For example, for Kubernetes to pull images from gitlab.com, use the registry address registry.gitlab.com:
kubectl create secret docker-registry regcred --docker-server=<your-registry-server> --docker-username=<your-name> --docker-password=<your-pword> --docker-email=<your-email>
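Once the regcred secret exists, one way to avoid editing every manifest is to attach it to the namespace's default ServiceAccount, for example:
kubectl patch serviceaccount default \
-p '{"imagePullSecrets": [{"name": "regcred"}]}'
Pods using that ServiceAccount will then pull with those credentials automatically.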

Airflow KubePodOperator pull image from private repository

How can Apache Airflow's KubernetesPodOperator pull docker images from a private repository?
The KubernetesPodOperator has an image_pull_secrets argument to which you can pass a Secrets object to authenticate with the private repository. But the Secrets object can only represent an environment variable or a volume, neither of which fits my understanding of how Kubernetes uses secrets to authenticate with private repos.
Using kubectl you can create the required secret with something like
$ kubectl create secret docker-registry $SECRET_NAME \
--docker-server=https://${ACCOUNT}.dkr.ecr.${REGION}.amazonaws.com \
--docker-username=AWS \
--docker-password="${TOKEN}" \
--docker-email="${EMAIL}"
But how can you create the authentication secret in Airflow?
There is a Secret object with the docker-registry type, described in the Kubernetes documentation, which can be used to authenticate to a private repository.
As you mentioned in your question, you can use kubectl to create a secret of the docker-registry type and then try to pass it with image_pull_secrets.
However, depending on the platform you are using, this might have limited or no use at all, according to the Kubernetes documentation:
Configuring Nodes to Authenticate to a Private Registry
Note: If you are running on Google Kubernetes Engine, there will already be a .dockercfg on each node with credentials for Google Container Registry. You cannot use this approach.
Note: If you are running on AWS EC2 and are using the EC2 Container Registry (ECR), the kubelet on each node will manage and update the ECR login credentials. You cannot use this approach.
Note: This approach is suitable if you can control node configuration. It will not work reliably on GCE, and any other cloud provider that does automatic node replacement.
Note: Kubernetes as of now only supports the auths and HttpHeaders section of docker config. This means credential helpers (credHelpers or credsStore) are not supported.
Making this work on the mentioned platforms is possible, but it would require automated scripts and third-party tools.
In the Amazon ECR example, the Amazon ECR Docker Credential Helper would be needed to periodically pull AWS credentials into the Docker registry configuration, and another script would then be needed to update the Kubernetes docker-registry secret.
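As a rough sketch of that second script (assuming AWS CLI v2 and kubectl access from wherever it runs; the variable names match the command in the question):
#!/bin/sh
# Hypothetical refresh job: ECR tokens expire after roughly 12 hours, so
# recreate the docker-registry secret with a fresh token on a schedule.
TOKEN=$(aws ecr get-login-password --region "${REGION}")
kubectl delete secret "${SECRET_NAME}" --ignore-not-found
kubectl create secret docker-registry "${SECRET_NAME}" \
--docker-server="https://${ACCOUNT}.dkr.ecr.${REGION}.amazonaws.com" \
--docker-username=AWS \
--docker-password="${TOKEN}" \
--docker-email="${EMAIL}"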
As for Airflow itself I don't think it has functionality to create its own docker-repository secrets.
You can request functionality like that in Apache Airflow JIRA.
P.S.
If you still have issues with your K8s cluster, you might want to create a new question on Stack Overflow addressing them.

Is it possible to use Jenkins secret as an imagePullSecret in Kubernetes pod agent template in declarative pipeline

My goal is to set up jenkins agent pod in kubernetes cluster for which the docker image is needed to be pulled from a private registry. I cannot provide the credentials in source control. Is there any possible way to fetch credentials from jenkins secrets rather than providing a kubernetes secret in podSpec?
I've done this before when the images were stored in Azure Container Registry (ACR). In that case we used the "with credentials" plugin combined with the "Azure CLI" plugin to push/pull the images from ACR.
Here is a similar example, but using docker hub instead of ACR as the private registry:
https://medium.com/@gustavo.guss/jenkins-building-docker-image-and-sending-to-registry-64b84ea45ee9

How to create a Dockerfile for the Grafana tool which is on my local machine? How to integrate AAD with Grafana in a Kubernetes cluster?

I need to create a Docker image of the Grafana app that is on my local machine, and deploy the same image to Azure Kubernetes. I'm able to integrate AAD with my local Grafana, so I need to create an image out of it.
I'm able to deploy a ready-made image from Docker Hub and run it in the Kubernetes cluster, but I'm unable to integrate it with AAD authentication.
Problem 1: Need a Dockerfile which will create a Docker image of the local Grafana app.
Problem 2: How to integrate AAD authentication with an already running Grafana container in the Kubernetes cluster.
Problem 3: We override defaults.ini values with ENV variables. Can we add ENV variables from the Grafana UI?
I need a solution for any of these problems.
I followed the article (https://github.com/PlagueHO/Workshop-AKS#prerequisite-knowledge).
This article helped me to create a Service, Deployment and ConfigMap in the Kubernetes cluster.
To integrate AAD, we just need to add the client ID, secret, token URL, auth URL, etc. to the ConfigMap (grafana.ini) and then restart the pod in the cluster (a sketch is shown below),
or
download the code from https://github.com/grafana/grafana and change the AAD values in ~/conf/defaults.ini.
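As a sketch of the ConfigMap approach; the names, tenant and client values are placeholders, and the keys assume Grafana's generic OAuth settings from the docs linked below:
apiVersion: v1
kind: ConfigMap
metadata:
  name: grafana-config        # hypothetical name, mounted as grafana.ini
data:
  grafana.ini: |
    [auth.generic_oauth]
    enabled = true
    name = Azure AD
    client_id = <AAD_APP_CLIENT_ID>
    client_secret = <AAD_APP_CLIENT_SECRET>
    scopes = openid email profile
    auth_url = https://login.microsoftonline.com/<TENANT_ID>/oauth2/authorize
    token_url = https://login.microsoftonline.com/<TENANT_ID>/oauth2/token
The same settings can alternatively be supplied as GF_AUTH_GENERIC_OAUTH_* environment variables on the Grafana container instead of editing grafana.ini.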
useful articles:
https://grafana.com/docs/auth/generic-oauth/