We have a Kubernetes cluster running on-premises and a private GCR repository. How can we give the on-premises cluster access to that private repository? As far as I know this can be done with the gcloud SDK, but it is not feasible to install the gcloud SDK on every node of the cluster.
We used to deploy pods on an Azure AKS cluster with images pulled from GCR. These are the steps we followed:
Create a service account in gcloud with permissions to pull from GCR.
Create keys for the service account.
Add a kubectl secret.
Use the secret in the pod YAML.
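The first step (creating the service account and granting it GCR pull access) has no command shown above. A hedged sketch, where gcr-puller and project-id are placeholders and roles/storage.objectViewer is one way to grant read access to the registry's underlying storage:
gcloud iam service-accounts create gcr-puller --display-name="GCR pull-only account"
gcloud projects add-iam-policy-binding project-id --member="serviceAccount:gcr-puller@project-id.iam.gserviceaccount.com" --role="roles/storage.objectViewer"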
Create keys for the service account.
gcloud iam service-accounts keys create gcr-docker-cred.json --iam-account=service-account-name@project-id.iam.gserviceaccount.com
Add a kubectl secret.
kubectl create secret docker-registry gcriosecret --docker-server=https://gcr.io --docker-username=_json_key --docker-email=user@example.com --docker-password="$(cat gcr-docker-cred.json)"
Use the secret in the pod YAML.
imagePullSecrets:
- name: gcriosecret
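Putting it together, a minimal sketch of a Pod spec that pulls from GCR using this secret might look like the following (the image path is a placeholder):
apiVersion: v1
kind: Pod
metadata:
  name: gcr-test
spec:
  containers:
  - name: app
    image: gcr.io/project-id/my-image:latest
  imagePullSecrets:
  - name: gcriosecret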
This blog might also be helpful.
Kubernetes clusters running on GKE or GCE have native support for accessing the container registry and need no further configuration.
As you mentioned, you are running an on-premises cluster, so you are not on either of those platforms and only use the Container Registry from GCP. While I haven't had the chance to test this (I don't have a cluster outside Google Cloud), the process shouldn't be different from the usual process for pulling an image from a private registry.
In your case you can create a secret with the auth credentials for the gcr.io registry like this:
kubectl create secret docker-registry regcred --docker-server=<your-registry-server> --docker-username=<your-name> --docker-password=<your-pword> --docker-email=<your-email>
your-registry-server in this case will probably be https://gcr.io/[your-project-id]
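For gcr.io specifically, a hedged example of that command, reusing the _json_key username and the service-account key file from the answer above (the email is just a placeholder), could be:
kubectl create secret docker-registry regcred --docker-server=https://gcr.io --docker-username=_json_key --docker-password="$(cat gcr-docker-cred.json)" --docker-email=user@example.com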
When you have created the secret named regcred, you can configure pods to use it for pulling the desired image from the registry by adding an imagePullSecrets section like this:
apiVersion: v1
kind: Pod
metadata:
  name: private-reg
spec:
  containers:
  - name: private-reg-container
    image: [The image you want to pull]
  imagePullSecrets:
  - name: regcred
Then you can test if the image is correctly pulled by deploying this pod:
kubectl create -f [your pod yaml]
Wait for the pod to be created, then describe it with kubectl describe pod private-reg; you should see an event sequence similar to:
Events:
  Type    Reason     Age  From                                                Message
  ----    ------     ---- ----                                                -------
  Normal  Scheduled  4m   default-scheduler                                   Successfully assigned private-reg to gke-cluster-22-default-pool-e7830b6c-pxmt
  Normal  Pulling    4m   kubelet, gke-cluster-22-default-pool-e7830b6c-pxmt  pulling image "gcr.io/XXX/XXX:latest"
  Normal  Pulled     3m   kubelet, gke-cluster-22-default-pool-e7830b6c-pxmt  Successfully pulled image "gcr.io/XXX/XXX:latest"
  Normal  Created    3m   kubelet, gke-cluster-22-default-pool-e7830b6c-pxmt  Created container
  Normal  Started    3m   kubelet, gke-cluster-22-default-pool-e7830b6c-pxmt  Started container
I have deployed an EFS file system in an AWS EKS cluster; after the deployment my storage pod is up and running.
kubectl get pod -n storage
NAME READY STATUS RESTARTS AGE
nfs-client-provisioner-968445d79-g8wjr 1/1 Running 0 136m
When I try to deploy the application, the pod does not come up; it stays in Pending state (0/1), and at the same time the PVC is not bound and also stays Pending.
Here are the logs from after the actual application deployment:
I0610 13:26:11.875109 1 controller.go:987] provision "default/logs" class "efs": started
E0610 13:26:11.894816 1 controller.go:1004] provision "default/logs" class "efs": unexpected error getting claim reference: selfLink was empty, can't make reference
I'm using Kubernetes version 1.20. Could someone please help me with this?
Kubernetes 1.20 stopped propagating selfLink.
There is a workaround available, but it does not always work.
After the lines
spec:
  containers:
  - command:
    - kube-apiserver
add
    - --feature-gates=RemoveSelfLink=false
then reapply the API server configuration:
kubectl apply -f /etc/kubernetes/manifests/kube-apiserver.yaml
This workaround will not work after version 1.20 (1.21 and up), as selfLink will be completely removed.
Another solution is to use a newer NFS provisioner image:
gcr.io/k8s-staging-sig-storage/nfs-subdir-external-provisioner:v4.0.0
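If you go that route, the image is swapped in the provisioner's Deployment. A hedged sketch, assuming the container inside the deployment is also named nfs-client-provisioner (the deployment name and namespace come from your output above):
kubectl set image deployment/nfs-client-provisioner -n storage nfs-client-provisioner=gcr.io/k8s-staging-sig-storage/nfs-subdir-external-provisioner:v4.0.0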
I am dealing with some issues on Kubernetes on Azure (AKS) using Autoscaler and secrets for pulling images from Docker Hub.
I created the secret in my application's namespace while having 3 nodes enabled (the initial cluster state).
kubectl create secret docker-registry mysecret --docker-server=https://index.docker.io/v1/ --docker-username=<docker_id> --docker-password=<docker_password> -n mynamespace
I deploy my application using the imagePullSecrets option after specifying the image URL:
imagePullSecrets:
- name: mysecret
After deploying the application, I created the autoscaler rule:
kubectl autoscale deployment mydeployment --cpu-percent=50 --min=1 --max=20 -n mynamespace
All new pods pull the image correctly. However, at some point, when a new Kubernetes node is automatically provisioned, new pods that need the Docker Hub image cannot start and fail with:
Failed to pull image "mydocherhubaccount/myimage:mytag": rpc error: code = Unknown desc = Error response from daemon: pull access denied for mydocherhubaccount/myimage:mytag, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
Is there anything I am missing here? I waited for 15 minutes and recreated the pods, but it did not help.
I use Kubernetes 1.15.5 on Azure AKS. The cluster was created using the following command.
az aks create -g myresourcegroup -n mynamespace --location eastus --kubernetes-version 1.15.5 --node-count 3 --node-osdisk-size 100 --node-vm-size Standard_D4_v3 --enable-vmss --enable-cluster-autoscaler --min-count 3 --max-count 5
I appreciate any help; I am really stuck here.
I have set up an AKS cluster with 14 Linux nodes. I am deploying code using Helm charts. The pods with the manual storageClass get created successfully, but the ones that use the default storageClass fail to create a persistent volume claim with this error:
Warning ProvisioningFailed (x894 over 33h) persistentvolume-controller Failed to provision volume with StorageClass "default": azureDisk - claim.Spec.Selector is not supported for dynamic provisioning on Azure disk
I tried creating an NFS storage and adding it to the Kubernetes cluster using kubectl, but the pods are not using that NFS mount for volume creation.
kubectl describe pvc dev-aaf-aaf-sms -n onap
Name:          dev-aaf-aaf-sms
Namespace:     onap
StorageClass:  default
Status:        Pending
Volume:
Labels:        app=aaf-sms
               chart=aaf-sms-4.0.0
               heritage=Tiller
               release=dev-aaf
Annotations:   volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/azure-disk
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode:    Filesystem
Mounted By:    dev-aaf-aaf-sms-6bbffff5db-qxm7j
Events:
  Type     Reason              Age                        From                         Message
  ----     ------              ----                       ----                         -------
  Warning  ProvisioningFailed  <invalid> (x894 over 33h)  persistentvolume-controller  Failed to provision volume with StorageClass "default": azureDisk - claim.Spec.Selector is not supported for dynamic provisioning on Azure disk
Can someone with Azure AKS or Kubernetes experience provide some guidance here?
Q: Is it possible to set up a default NFS volume mount for all nodes on an AKS cluster using kubectl?
It appears to be a compatibility constraint between Azure and Kubernetes for the "default" storageClass.
For a PV with the "manual" storageClass, the PVC gets dynamically created successfully, so we need to define the default storageClass for nodes on the AKS cluster. In my case I need to define it as an NFS mount.
I know how to do this on an individual VM after installing Kubernetes on it, but I am struggling to set it for all nodes of an AKS cluster. The Azure documentation only talks about doing it at the pod level, not the node level.
You are clearly "hitting" this piece of code, which implies that you cannot have a selector in your PVC spec.
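In other words, the claim should rely on the storage class alone. A minimal sketch of a selector-free PVC against the default class, reusing the name and namespace from your output (the 1Gi size is just an assumed placeholder):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dev-aaf-aaf-sms
  namespace: onap
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: default
  resources:
    requests:
      storage: 1Gi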
When starting up a Kubernetes cluster, I load etcd plus the core kubernetes processes - kube-proxy, kube-apiserver, kube-controller-manager, kube-scheduler - as static pods from a private registry. This has worked in the past by ensuring that the $HOME environment variable is set to "/root" for kubelet, and then having /root/.docker/config.json defined with the credentials for the private docker registry.
When attempting to run Kubernetes 1.6, with CRI enabled, I get errors in the kubelet log saying it cannot pull the pause:3.0 container from my private docker registry due to no authentication.
Setting --enable-cri=false on the kubelet command line works, but when CRI is enabled, it doesn't seem to use the /root/.docker/config.json file for authentication.
Is there some new way to provide the docker credentials needed to load static pods when running with CRI enabled?
In 1.6, I managed to make it work with the following recipe in https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod
$ kubectl create secret docker-registry myregistrykey --docker-server=DOCKER_REGISTRY_SERVER --docker-username=DOCKER_USER --docker-password=DOCKER_PASSWORD --docker-email=DOCKER_EMAIL
You need to specify the newly created myregistrykey as the credential under the imagePullSecrets field in the pod spec:
apiVersion: v1
kind: Pod
metadata:
  name: foo
  namespace: awesomeapps
spec:
  containers:
  - name: foo
    image: janedoe/awesomeapp:v1
  imagePullSecrets:
  - name: myregistrykey
It turns out that there is a deficiency in the CRI capabilities in Kubernetes 1.6. With CRI, the "pause" container - now called the "Pod Sandbox Image" - is treated as a special case, because whether you even need it is an "implementation detail" of the container runtime. In the 1.6 release, the credentials applied for other containers, for example from /root/.docker/config.json, are not used when trying to pull the Pod Sandbox Image.
Thus, if you are trying to pull this image from a private registry, the CRI logic doesn't associate the credentials with the pull request. There is now a Kubernetes issue (#45738) to address this, targeted for 1.7.
In the meantime, an easy workaround is to pre-pull the "Pause" container into the node's local docker image cache before starting up the kubelet process.
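A hedged sketch of that pre-pull, assuming the registry hostname is a placeholder, the node runs the Docker engine directly, and the image name matches what the kubelet's --pod-infra-container-image flag points at:
docker login my-private-registry.example.com
docker pull my-private-registry.example.com/pause:3.0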
Kubernetes has master and minion nodes.
Will (can) Kubernetes run specified Docker containers on the master node(s)?
I guess another way of saying it is: can a master also be a minion?
Thanks for any assistance.
Update 2015-08-06: As of PR #12349 (available in 1.0.3 and will be available in 1.1 when it ships), the master node is now one of the available nodes in the cluster and you can schedule pods onto it just like any other node in the cluster.
A docker container can only be scheduled onto a kubernetes node running a kubelet (what you refer to as a minion). There is nothing preventing you from creating a cluster where the same machine (physical or virtual) runs both the kubernetes master software and a kubelet, but the current cluster provisioning scripts separate the master onto a distinct machine.
This is going to change significantly when Issue #6087 is implemented.
You need to remove the taint from your master node to run containers on it, although this is not recommended.
Run this on your master node:
kubectl taint nodes --all node-role.kubernetes.io/master-
Courtesy of Alex Ellis' blog post here.
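To confirm the taint is gone (the node name below is a placeholder), something like this should report Taints: <none> for the master:
kubectl describe node my-master-node | grep Taints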
You can try this code:
kubectl label node [name_of_node] node-short-name=node-1
Create a YAML file (first.yaml):
apiVersion: v1
kind: Pod
metadata:
  name: nginxtest
  labels:
    env: test
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
  nodeSelector:
    node-short-name: node-1
Create the pod:
kubectl create -f first.yaml
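You can then verify, assuming the node was labeled as above, that the pod landed on the node carrying node-short-name=node-1 by checking the NODE column of:
kubectl get pod nginxtest -o wide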