Not able to pull registry.k8s.io on MicroK8s start - kubernetes

I'm trying to kickstart a MicroK8s cluster, but the Calico pod stays in Pending status because of a 403 error when pulling registry.k8s.io/pause:3.7.
This is the error:
Failed to create pod sandbox: rpc error: code = Unknown desc = failed to get sandbox image "registry.k8s.io/pause:3.7": failed to pull image "registry.k8s.io/pause:3.7": failed to pull and unpack image "registry.k8s.io/pause:3.7": failed to resolve reference "registry.k8s.io/pause:3.7": pulling from host registry.k8s.io failed with status code [manifests 3.7]: 403 Forbidden
This is a new server, so it might be missing some configuration.
The insecure registry, according to the MicroK8s documentation, is enabled at localhost:32000.
I've enabled DNS on MicroK8s, but nothing has changed.
If I try to pull the image with Docker directly, I also get a forbidden error.
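A 403 for a public image like pause:3.7 usually means the request never reached registry.k8s.io itself (the registry serves these manifests anonymously), so a corporate proxy or firewall intercepting the traffic is a likely suspect. If a mirror of registry.k8s.io is reachable from this network, one possible workaround is to point MicroK8s's containerd at it; this is only a sketch, and the mirror URL below is a placeholder:

```toml
# /var/snap/microk8s/current/args/certs.d/registry.k8s.io/hosts.toml
# (path per the MicroK8s registry-mirror docs; verify against your version)
server = "https://registry.k8s.io"

# Placeholder mirror - replace with a registry your network can actually reach.
[host."https://my-mirror.example.com"]
  capabilities = ["pull", "resolve"]
```

After writing the file, restart with `microk8s stop` and `microk8s start` so containerd picks up the new hosts.toml.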

Related

k8s access denied when pulling local images

I use podman with a local registry. I am able to pull the images from the command line and also see the manifest. When I deploy to k8s, it fails to pull the image with an access denied error. Any ideas? I've Googled for days now but haven't found an answer that works.
I run Ubuntu 22.04 on VMware, if that makes a difference. Thank you.
kubelet Failed to pull image "localhost:5001/datawire/aes:3.1.0": rpc error: code = Unknown desc = failed to pull and unpack image "localhost:5001/datawire/aes:3.1.0": failed to resolve reference "localhost:5001/datawire/aes:3.1.0": failed to do request: Head "http://localhost:5001/v2/datawire/aes/manifests/3.1.0": dial tcp 127.0.0.1:5001: connect: connection refused
When Kubernetes attempts to create a new pod but is unable to pull the required container image, an error message stating "Failed to pull image" will be displayed. When you try to add a new resource to your cluster with a command like "kubectl apply," you should see this right away. When you inspect the pod with "kubectl describe pod/my-pod," the error will appear in the Events.
Pull errors originate on the nodes in your cluster. Each node's kubelet worker process is in charge of obtaining the images required to process a pod scheduling request. When a node is unable to download an image, the status is reported to the cluster control plane.
Images may not pull for a variety of reasons. It's possible that your nodes' networking failed, or that the cluster as a whole is experiencing connectivity issues. If you are online, the registry is up, and pull errors continue to occur, your firewall or traffic filtering may be malfunctioning.
Refer to doc1 and doc2 for more information.
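One additional cause worth checking in this specific case: the image reference localhost:5001/... is resolved by the container runtime on the node, not on the machine where you ran podman, so the pull targets the node's own 127.0.0.1:5001 - which matches the connection refused in the error. A sketch of one workaround, assuming containerd with a certs.d config_path and a registry actually reachable at 192.168.1.10:5001 (placeholder address):

```toml
# /etc/containerd/certs.d/localhost:5001/hosts.toml
# Redirect pulls of "localhost:5001/..." to the host where the registry really runs.
# The IP below is a placeholder for your VM/host address.
server = "http://localhost:5001"

[host."http://192.168.1.10:5001"]
  capabilities = ["pull", "resolve"]
```

Alternatively, reference the image by a node-reachable address in the Deployment (e.g. 192.168.1.10:5001/datawire/aes:3.1.0) and, if the registry is plain HTTP, mark it as insecure in the runtime configuration.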

kubernetes issue : Init:ErrImagePull

I have the following critical issue on Kubernetes.
For a couple of days I've been fighting with Docker image issues.
I have one server with docker-ce, and the Kubernetes master on another server.
I don't understand this issue, because everything was working fine until recently, when the image problems started.
The images are present on the Docker server, but from Kubernetes I see the errors below:
Failed to pull image "busybox": rpc error: code = Unknown desc = Error
response from daemon: Get https://registry-1.docker.io/v2/: net/http:
TLS handshake timeout
and also issues with get image from docker: Error: ErrImagePull
Back-off pulling image "busybox" Error: ImagePullBackOff
Can someone help me with this, please?
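A `net/http: TLS handshake timeout` against registry-1.docker.io usually points at connectivity from the Docker host (a slow link, DNS trouble, or an unconfigured proxy) rather than at Kubernetes itself. If the server sits behind an HTTP proxy, the Docker daemon needs it configured explicitly; a sketch, where the proxy URL is a placeholder:

```
# /etc/systemd/system/docker.service.d/http-proxy.conf
# (proxy address below is a placeholder for your environment)
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:3128"
Environment="HTTPS_PROXY=http://proxy.example.com:3128"
Environment="NO_PROXY=localhost,127.0.0.1"
```

Then reload and restart the daemon (`systemctl daemon-reload && systemctl restart docker`) and retry the pull.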

Failed to pull image "velero/velero-plugin-for-gcp:v1.1.0" while installing Velero in GKE Cluster

I'm trying to install and configure Velero for kubernetes backup. I have followed the link to configure it in my GKE cluster. The installation went fine, but velero is not working.
I am using google cloud shell for running all my commands (I have installed and configured velero client in my google cloud shell)
On further inspection of the velero deployment and pods, I found that they are not able to pull the image from the Docker repository.
kubectl get pods -n velero
NAME READY STATUS RESTARTS AGE
velero-5489b955f6-kqb7z 0/1 Init:ErrImagePull 0 20s
Error from velero pod (kubectl describe pod) (output redacted for readability - only relevant info shown below)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 38s default-scheduler Successfully assigned velero/velero-5489b955f6-kqb7z to gke-gke-cluster1-default-pool-a354fba3-8674
Warning Failed 22s kubelet, gke-gke-cluster1-default-pool-a354fba3-8674 Failed to pull image "velero/velero-plugin-for-gcp:v1.1.0": rpc error: code = Unknown desc = Error response from daemon: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Warning Failed 22s kubelet, gke-gke-cluster1-default-pool-a354fba3-8674 Error: ErrImagePull
Normal BackOff 21s kubelet, gke-gke-cluster1-default-pool-a354fba3-8674 Back-off pulling image "velero/velero-plugin-for-gcp:v1.1.0"
Warning Failed 21s kubelet, gke-gke-cluster1-default-pool-a354fba3-8674 Error: ImagePullBackOff
Normal Pulling 8s (x2 over 37s) kubelet, gke-gke-cluster1-default-pool-a354fba3-8674 Pulling image "velero/velero-plugin-for-gcp:v1.1.0"
Command used to install velero: (some of the values are given as variables)
velero install \
--provider gcp \
--plugins velero/velero-plugin-for-gcp:v1.1.0 \
--bucket $storagebucket \
--secret-file ~/velero-backup-storage-sa-key.json
Velero Version
velero version
Client:
Version: v1.4.2
Git commit: 56a08a4d695d893f0863f697c2f926e27d70c0c5
<error getting server version: timed out waiting for server status request to be processed>
GKE version
v1.15.12-gke.2
Isn't this a Private Cluster? – mario 31 mins ago
@mario this is a private cluster, but I can deploy other services without any issues (e.g. I have deployed nginx successfully) – Sreesan 15 mins ago
Well, this is a known limitation of GKE Private Clusters. As you can read in the documentation:
Can't pull image from public Docker Hub
Symptoms
A Pod running in your cluster displays a warning in kubectl describe such as Failed to pull image: rpc error: code = Unknown desc = Error response from daemon: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Potential causes
Nodes in a private cluster do not have outbound access to the public internet. They have limited access to Google APIs and services, including Container Registry.
Resolution
You cannot fetch images directly from Docker Hub. Instead, use images hosted on Container Registry. Note that while Container Registry's Docker Hub mirror is accessible from a private cluster, it should not be exclusively relied upon. The mirror is only a cache, so images are periodically removed, and a private cluster is not able to fall back to Docker Hub.
You can also compare it with this answer.
It can easily be verified with a simple experiment. Try to run two different nginx deployments: the first based on the image nginx (which is equivalent to nginx:latest) and the second based on nginx:1.14.2.
While the first scenario works, because the nginx:latest image can be pulled from Container Registry's Docker Hub mirror (which is accessible from a private cluster), any attempt to pull nginx:1.14.2 will fail, as you'll see in the Pod events. This happens because the kubelet cannot find this version of the image in GCR, so it tries to pull it from the public Docker registry (https://registry-1.docker.io/v2/), which is not possible in Private Clusters. "The mirror is only a cache, so images are periodically removed, and a private cluster is not able to fall back to Docker Hub." - as you can read in the docs.
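The experiment can be reproduced with two minimal Deployments (names are arbitrary; the comments restate the answer's claim about the mirror cache):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-latest
spec:
  replicas: 1
  selector:
    matchLabels: {app: nginx-latest}
  template:
    metadata:
      labels: {app: nginx-latest}
    spec:
      containers:
      - name: nginx
        image: nginx          # served from GCR's Docker Hub mirror cache -> pulls fine
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-pinned
spec:
  replicas: 1
  selector:
    matchLabels: {app: nginx-pinned}
  template:
    metadata:
      labels: {app: nginx-pinned}
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2   # not in the mirror cache -> ErrImagePull in a private cluster
```

Apply both with kubectl apply and compare the Pod events with kubectl describe.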
If you still have doubts, just ssh into your node and try to run the following commands:
curl https://cloud.google.com/container-registry/
curl https://registry-1.docker.io/v2/
While the first one works perfectly, the second one will eventually fail:
curl: (7) Failed to connect to registry-1.docker.io port 443: Connection timed out
The reason? "Nodes in a private cluster do not have outbound access to the public internet."
Solution ?
You can search what is currently available in GCR here.
In many cases you should be able to get the required image if you don't specify its exact version (by default the latest tag is used). While this can help with nginx, unfortunately no version of velero/velero-plugin-for-gcp is currently available in Google Container Registry's Docker Hub mirror.
Granting private nodes outbound internet access by using Cloud NAT seems the only reasonable solution that can be applied in your case.
I solved this problem by realizing that the version of velero/velero-plugin-for-gcp does not track the version of velero/velero.
For example, the latest versions at the time of writing are velero/velero:v1.9.1 and velero/velero-plugin-for-gcp:v1.5.0.

fluentd daemon set container for papertrail failing to start in kubernetes cluster

I'm trying to set up fluentd in a Kubernetes cluster to aggregate logs in Papertrail, as per the documentation provided here.
The configuration file is fluentd-daemonset-papertrail.yaml
It basically creates a daemon set for fluentd container and a config map for fluentd configuration.
When I apply the configuration, the pod is assigned to a node and the container is created. However, it either doesn't complete initialization, or the pod gets killed immediately after it starts.
As the pods are getting killed, I'm losing the logs too, so I couldn't investigate the cause of the issue.
Looking through the events for kube-system namespace has below errors,
Error: failed to start container "fluentd": Error response from daemon: OCI runtime create failed: container_linux.go:338: creating new parent process caused "container_linux.go:1897: running lstat on namespace path \"/proc/75026/ns/ipc\" caused \"lstat /proc/75026/ns/ipc: no such file or directory\"": unknown
Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "9559643bf77e29d270c23bddbb17a9480ff126b0b6be10ba480b558a0733161c" network for pod "fluentd-papertrail-b9t5b": NetworkPlugin kubenet failed to set up pod "fluentd-papertrail-b9t5b_kube-system" network: Error adding container to network: failed to open netns "/proc/111610/ns/net": failed to Statfs "/proc/111610/ns/net": no such file or directory
I'm not sure what's causing these errors. I'd appreciate any help understanding and troubleshooting them.
Also, is it possible to look at logs/events that could tell us why a pod was given a terminate signal?
Please ensure that /etc/cni/net.d and its /opt/cni/bin friend both exist and are correctly populated with the CNI configuration files and binaries on all Nodes.
Take a look: sandbox.
With help from the Papertrail support team, I was able to resolve the issue by removing the entry below from the manifest file:
kubernetes.io/cluster-service: "true"
This label appears to have been deprecated.
Relevant github issues:
https://github.com/fluent/fluentd-kubernetes-daemonset/issues/296
https://github.com/kubernetes/kubernetes/issues/72757
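For reference, the fix amounts to deleting one line from the DaemonSet metadata in fluentd-daemonset-papertrail.yaml; the names below are abbreviated from the linked manifest and should be treated as illustrative:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-papertrail
  namespace: kube-system
  labels:
    k8s-app: fluentd-papertrail
    # kubernetes.io/cluster-service: "true"   <- remove this deprecated label
```

After editing, re-apply the manifest and confirm the pods stay in Running status.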

kubeadm init is failing

I was trying to set up a Kubernetes cluster on CentOS 7, and I am facing an issue while running the kubeadm init command below.
kubeadm init --apiserver-advertise-address=10.70.6.18 --pod-network-cidr=192.168.0.0/16
I1224 18:20:55.388552 11136 version.go:94] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://dl.k8s.io/release/stable-1.txt: dial tcp: lookup dl.k8s.io on 10.171.221.11:53: no such host
I1224 18:20:55.388679 11136 version.go:95] falling back to the local client version: v1.13.1
[init] Using Kubernetes version: v1.13.1
[preflight] Running pre-flight checks
[WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.0. Latest validated version: 18.06
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-apiserver:v1.13.1: output: v1.13.1: Pulling from kube-apiserver
73e3e9d78c61: Pulling fs layer
e08dba503a39: Pulling fs layer
error pulling image configuration: Get https://storage.googleapis.com/asia.artifacts.google-containers.appspot.com/containers/images/sha256:40a63db91ef844887af73e723e40e595e4aa651ac2c5637332d719e42abc4dd2: x509: certificate signed by unknown authority
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-controller-manager:v1.13.1: output: v1.13.1: Pulling from kube-controller-manager
On your worker node(s), try using a proxy or VPN to change your IP. I think the registry you are trying to pull from has blocked your IP.
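Besides a blocked IP, the `x509: certificate signed by unknown authority` line often indicates a TLS-intercepting proxy between the node and storage.googleapis.com. If that's the case here, one sketch of a fix on CentOS 7 is to add the proxy's CA to the system trust store and retry the pre-pull; the certificate file name below is a placeholder:

```
# corp-proxy-ca.crt is a placeholder for your proxy's CA certificate
cp corp-proxy-ca.crt /etc/pki/ca-trust/source/anchors/
update-ca-trust extract
systemctl restart docker
kubeadm config images pull   # re-run the failing pre-flight pull on its own
```

If `kubeadm config images pull` then succeeds, re-run kubeadm init; the image-pull pre-flight check should pass.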