Pod cannot pull image from private docker registry - kubernetes

I am having some real trouble getting my pods to pull images from a private docker registry that I have set up and am able to authenticate to (I can do docker login https://my.website.com/ and get Login Succeeded without having to put in my username:password, and I am able to run docker pull my.website.com:5000/human/forum and see all the layers being downloaded).
I use https://github.com/bazelbuild/rules_k8s#aliasing-eg-k8s_deploy where I specify the namespace to be "default".
I made sure to put "https://my.website.com:5000/v2/" (in lowercase) in the auth section in the docker config file before I generated the regcred secret.
Notice that I specify the imagePullSecrets below:
# deployment.yaml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: angular-bazel-example-prod
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: angular-bazel-example-prod
    spec:
      containers:
      - name: angular-bazel-example
        image: human/forum:dev
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
      imagePullSecrets:
      - name: regcred # Notice
I made sure to update my certificate authority certificates:
cp /etc/docker/certs.d/my.website.com\:5000/ca.crt /usr/local/share/ca-certificates/my.website.registry.com/
sudo update-ca-certificates
I see sudo curl --user testuser:testpassword --cacert /usr/local/share/ca-certificates/my.website.registry.com/ca.crt -X GET https://mywebsite.com:5000/v2/_catalog
> {"repositories":["human/forum"]}
I see sudo curl --user testuser:testpassword --cacert /usr/local/share/ca-certificates/mywebsite.registry.com/ca.crt -X GET https://mywebsite.com:5000/v2/human/forum/tags/list
> {"name":"a/repository","tags":["dev"]}
There must be a way to troubleshoot this but I don't know how.
One thing I am curious about is
kubectl describe pod my-first-pod...
...
Volumes:
default-token-mtz9g:
Type: Secret
Where can I find this volume? I can't kubectl exec into a container because none is running, because the pod can't pull the image.
Do you have any ideas on how I could troubleshoot this?
Thank you!
Slackware

Create a kubernetes secret to access the custom repository. One way is to provide the server, user and password manually. Documentation link.
kubectl create secret docker-registry $YOUR_REGISTRY_NAME --docker-server=https://$YOUR_SERVER_DNS/v2/ --docker-username=$YOUR_USER --docker-password=$YOUR_PASSWORD --docker-email=whatever@gmail.com --namespace=default
Then use it in your yaml
...
imagePullSecrets:
- name: $YOUR_REGISTRY_NAME
Regarding the default-token that you see mounted, it belongs to the service account that your pod uses to talk to the kubernetes api. You can find the service account by running kubectl get sa and kubectl describe sa $DEFAULT_SERVICE_ACCOUNT_ID. Find the token by running kubectl get secrets and kubectl describe secret $SECRET_ID. To clarify, this service account and its token have nothing to do with the docker registry unless you specify it. To include the registry in the service account, follow this guide: link
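The gist of that guide is to attach the pull secret to the service account your pods run under, so the secret is used automatically without listing it in every spec. A minimal sketch, assuming the secret is named regcred and the pods use the default service account in the default namespace:
kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "regcred"}]}'
kubectl get serviceaccount default -o yaml   # verify regcred now appears under imagePullSecrets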

Related

Unable to access minikube IP address

I am an absolute beginner to Kubernetes, and I was following this tutorial to get started. I have managed writing the yaml files. However once I deploy it, I am not able to access the web app.
This is my webapp yaml file
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp-deployment
  labels:
    app: webapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
      - name: webapp
        image: nanajanashia/k8s-demo-app:v1.0
        ports:
        - containerPort: 3000
        env:
        - name: USER_NAME
          valueFrom:
            secretKeyRef:
              name: mongo-secret
              key: mongo-user
        - name: USER_PWD
          valueFrom:
            secretKeyRef:
              name: mongo-secret
              key: mongo-password
        - name: DB_URL
          valueFrom:
            configMapKeyRef:
              name: mongo-config
              key: mongo-url
---
apiVersion: v1
kind: Service
metadata:
  name: webapp-servicel
spec:
  type: NodePort
  selector:
    app: webapp
  ports:
  - protocol: TCP
    port: 3000
    targetPort: 3000
    nodePort: 30200
When I run the command: kubectl get node
When I run the command: kubectl get pods, I can see the pods running
kubectl get svc
I then checked the logs for webapp, I don't see any errors
I then checked the detailed logs by running the command: kubectl describe pod podname
I don't see any obvious errors in the result above, but again I am not experienced enough to check if there is any config that's not set properly.
Other things I have done as troubleshooting
Ran the following command for the minikube to open up the app : minikube service webapp-servicel, it opens up the web page, but again does not connect to the IP.
Uninstalled minikube, kubectl and all relevant folders, and run everything again.
pinged the ip address directly from command line, and cannot reach.
I would appreciate if someone can help me fix this.
Try these 3 options:
1. Run kubectl get node -o wide to get the IP address of the node, then open NODE_IP_ADDRESS:30200 in a web browser.
2. Alternatively, run minikube service <SERVICE_NAME> --url, which will give you a direct URL to the application, and open that URL in a web browser.
3. Run kubectl port-forward svc/<SERVICE_NAME> 3000:3000
and access the application on localhost:3000.
Ran the following command for the minikube to open up the app : minikube service webapp-servicel, it opens up the web page, but again does not connect to the IP.
Uninstalled minikube, kubectl and .kube and run everything again.
pinged the ip address directly from command line, and cannot reach.
I suggest you try port forwarding:
https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/
kubectl port-forward svc/x-service NodePort:Port
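For the service in the question that would look roughly like this (assuming the Service keeps the name webapp-servicel and container port 3000 from the yaml above):
kubectl port-forward svc/webapp-servicel 3000:3000
Then open localhost:3000 in a browser while the port-forward is running.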
I got stuck here as well. After looking through some of the gitlab issues, I found a helpful tip about the minikube driver. The instructions for starting minikube are incorrect in the video if you used
minikube start --driver=docker
Here's how to fix your problem.
stop minikube
minikube stop
delete minikube (this deletes your cluster)
minikube delete
start up minikube again, but this time specify the hyperkit driver
minikube start --vm-driver=hyperkit
check status
minikube status
reapply your components in this order:
kubectl apply -f mongo-config.yaml
kubectl apply -f mongo-secret.yaml
kubectl apply -f mongo.yaml
kubectl apply -f webapp.yaml
get your ip
minikube ip
open a browser, go to ip address:30200 (or whatever the port you defined was, mine was 30100). You should see an image of a dog and a form.
Some information in this SO post is useful too.
On Windows 11 with Ubuntu 20.04 WSL, it worked for me by using:
minikube start --driver=hyperv
On Windows 10 with Docker Desktop you do not even need to use minikube. Just enable Kubernetes in the Docker Desktop settings and use kubectl. Check the link for further information.
Using the Kubernetes of Docker Desktop I could simply reach the webapp at localhost:30100. In my case, for some reason, I had to pull the mongo docker image manually with docker pull mongo:5.0.

How to configure microk8s kubernetes to use private container's in https://hub.docker.com/?

The microk8s document "Working with a private registry" leaves me unsure what to do. The "Secure registry" portion says Kubernetes does it one way (without indicating whether or not Kubernetes' way applies to microk8s), and microk8s uses containerd inside its implementation.
My YAML file contains a reference to a private container on dockerhub.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: blaw
spec:
  replicas: 1
  selector:
    matchLabels:
      app: blaw
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: blaw
    spec:
      containers:
      - image: johngrabner/py_blaw_service:v0.3.10
        name: py-transcribe-service
When I microk8s kubectl apply this file and do a microk8s kubectl describe, I get:
Warning Failed 16m (x4 over 18m) kubelet Failed to pull image "johngrabner/py_blaw_service:v0.3.10": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/johngrabner/py_blaw_service:v0.3.10": failed to resolve reference "docker.io/johngrabner/py_blaw_service:v0.3.10": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed
I have verified that I can download this repo from a console doing a docker pull command.
Pods using public containers work fine in microk8s.
The file /var/snap/microk8s/current/args/containerd-template.toml already contains something to make dockerhub work since public containers work. Within this file, I found
# 'plugins."io.containerd.grpc.v1.cri".registry' contains config related to the registry
[plugins."io.containerd.grpc.v1.cri".registry]

  # 'plugins."io.containerd.grpc.v1.cri".registry.mirrors' are namespace to mirror mapping for all namespaces.
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
      endpoint = ["https://registry-1.docker.io", ]
    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."localhost:32000"]
      endpoint = ["http://localhost:32000"]
The above does not appear related to authentication.
On the internet, I found instructions to create a secret to store credentials, but this does not work either.
microk8s kubectl create secret generic regcred --from-file=.dockerconfigjson=/home/john/.docker/config.json --type=kubernetes.io/dockerconfigjson
While you have created the secret, you then have to set up your deployment/pod to use that secret in order to download the image. This can be achieved with imagePullSecrets as described in the microk8s document you mentioned.
Since you already created your secret, you just have to reference it in your deployment:
...
spec:
  containers:
  - image: johngrabner/py_blaw_service:v0.3.10
    name: py-transcribe-service
  imagePullSecrets:
  - name: regcred
...
For more reading check how to Pull an Image from a Private Registry.
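If building the secret from your existing config.json keeps failing, another option is to create it directly from credentials. A minimal sketch for Docker Hub, assuming the secret name regcred and placeholder credentials (the Docker Hub server address is https://index.docker.io/v1/):
microk8s kubectl create secret docker-registry regcred --docker-server=https://index.docker.io/v1/ --docker-username=<your-dockerhub-user> --docker-password=<your-dockerhub-password-or-token> --docker-email=<your-email>
microk8s kubectl get secret regcred --output=yaml   # confirm the secret was stored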

Can I retrieve k8s secret and use it outside of the cluster?

If I want to save credential information in K8s and then retrieve it to use out of k8s, can I do it? and how?
Yes you can, but you probably shouldn't.
When you run kubectl get secret command, what it does behind the scenes is an api call to kubernetes api server.
To access the secret outside the cluster you will need to:
Have the Kubernetes api exposed to the client(if not in same network)
Setup authentication in order to create credentials used by external clients
Call the secrets endpoint. The endpoint is something like this /api/v1/namespaces/{namespace}/secrets
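A rough sketch of that last step with curl, assuming you already have a bearer token for an identity allowed to read secrets and the api server is reachable (host, port and secret name are placeholders):
curl -k -H "Authorization: Bearer $TOKEN" https://<api-server>:6443/api/v1/namespaces/default/secrets/mysecret
Note that the data fields in the returned object are base64-encoded.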
As said previously, you probably shouldn't do it; there are many secret management tools available on the market that would be better suited for this kind of situation.
If you are able to run a pod inside the namespace which contains the secret, you can create a pod which uses the secret:
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: mypod
    image: redis
    volumeMounts:
    - name: foo
      mountPath: "/etc/foo"
      readOnly: true
  volumes:
  - name: foo
    secret:
      secretName: mysecret
And then print the secret to stdout:
kubectl exec -it mypod -- ls /etc/foo/
kubectl exec -it mypod -- cat /etc/foo/secret.file
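If you only need the value on a machine where kubectl already works, you can also skip the pod and read the secret directly; the stored value is base64-encoded, and the key name here is just an example:
kubectl get secret mysecret -o jsonpath='{.data.username}' | base64 --decode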

How to get kubernetes applications to change deploy configs

I have two applications running in K8. APP A has write access to a data store and APP B has read access.
APP A needs to be able to change APP B's running deployment.
How we currently do this is manually by kicking off a process in APP A which adds a new DB in the data store (say db bob). Then we do:
kubectl edit deploy A
And change an environment variable to bob. This starts a rolling restart of all the pods of APP B. We would like to automate this process.
Is there anyway to get APP A to change the deployment config of APP B in k8?
Firstly answering your main question:
Is there anyway to get a service to change the deployment config of another service in k8?
From my understanding you are calling them Service A and Service B because of their purpose in real life, but to facilitate understanding I suggested an edit to call them APP A and APP B, because:
In Kubernetes, a Service is an abstraction which defines a logical set of Pods and a policy by which to access them (sometimes this pattern is called a micro-service).
So if in your question you meant:
"Is there anyway to get APP A to change the deployment config of APP B in k8?"
Then yes, you can give a pod admin privileges to manage other components of the cluster, using the kubectl set env command to change/add envs.
In order to achieve this, you will need:
A Service Account with needed permissions in the namespace.
NOTE: In my example below, since I don't know if you are working with multiple namespaces, I'm using a ClusterRole, granting cluster-admin to a specific service account. If you use only 1 namespace for these apps, consider a Role instead.
A ClusterRoleBinding binding the permissions of the service account to a role of the Cluster.
The kubectl client inside the pod of APP A (added manually or by modifying the docker image)
Steps to Reproduce:
Create a deployment to apply the cluster-admin privileges, I'm naming it manager-deploy.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
name: manager-deploy
labels:
app: manager
spec:
replicas: 1
selector:
matchLabels:
app: manager
template:
metadata:
labels:
app: manager
spec:
serviceAccountName: k8s-role
containers:
- name: manager
image: gcr.io/google-samples/node-hello:1.0
Create a deployment with an environment var, mocking your Service B. I'm naming it deploy-env.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: env-deploy
  labels:
    app: env-replace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: env-replace
  template:
    metadata:
      labels:
        app: env-replace
    spec:
      serviceAccountName: k8s-role
      containers:
      - name: env-replace
        image: gcr.io/google-samples/node-hello:1.0
        env:
        - name: DATASTORE_NAME
          value: "john"
Create a ServiceAccount and a ClusterRoleBinding with cluster-admin privileges, I'm naming it service-account-for-pod.yaml (notice it's mentioned in manager-deploy.yaml):
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: k8s-role
subjects:
- kind: ServiceAccount
  name: k8s-role
  namespace: default
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: k8s-role
Apply service-account-for-pod.yaml, deploy-env.yaml and manager-deploy.yaml, then list the current environment variables from the deploy-env pod:
$ kubectl apply -f manager-deploy.yaml
deployment.apps/manager-deploy created
$ kubectl apply -f deploy-env.yaml
deployment.apps/env-deploy created
$ kubectl apply -f service-account-for-pod.yaml
clusterrolebinding.rbac.authorization.k8s.io/k8s-role created
serviceaccount/k8s-role created
$ kubectl exec -it env-deploy-fbd95bb94-hcq75 -- printenv
DATASTORE_NAME=john
Shell into the manager pod, download the kubectl binary and run kubectl set env deployment/<deployment_name> VAR_NAME=VALUE:
$ kubectl exec -it manager-deploy-747c9d5bc8-p684s -- /bin/bash
root@manager-deploy-747c9d5bc8-p684s:/# curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl
root@manager-deploy-747c9d5bc8-p684s:/# chmod +x ./kubectl
root@manager-deploy-747c9d5bc8-p684s:/# mv ./kubectl /usr/local/bin/kubectl
root@manager-deploy-747c9d5bc8-p684s:/# kubectl set env deployment/env-deploy DATASTORE_NAME=bob
Verify the env var value on the pod (notice that the pod is recreated when the deployment is modified):
$ kubectl exec -it env-deploy-7f565ffc4-t46zc -- printenv
DATASTORE_NAME=bob
Let me know in the comments if you have any doubt on how to apply this solution to your environment.
You could give service A access to your cluster (install kubectl and allow traffic from the NAT of service A to your cluster master) and execute your commands via some cron jobs, jenkins, ssh, or something similar. You can also do kubectl patch, or get the current config of the second deployment with kubectl get deployment <name> -o yaml --export > deployment.yaml, edit it there with some regex/awk/sed and then apply it, although the --export method is getting deprecated, so you might as well have service A download the GIT repo and apply the new config like that.
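As a rough sketch of the kubectl patch route, reusing the names from the example above (env-deploy deployment, env-replace container, DATASTORE_NAME variable), a strategic merge patch that changes the env var and triggers a rollout could look like:
kubectl patch deployment env-deploy -p '{"spec":{"template":{"spec":{"containers":[{"name":"env-replace","env":[{"name":"DATASTORE_NAME","value":"bob"}]}]}}}}'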
Thank you all for the answers (upvoted as they were both correct). I am just adding my own answer to document exactly what solved it for me.
In my case I just needed to make use of the patch URL available in k8s. That plus this example worked.
All I needed to do was create a service account to restrict who can patch where, restrict that account to Service A, and use the Java client in Service A to update the chart of Service B. After that the pods would roll and it was done.
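For reference, a rough sketch of what such a restricted account could look like, assuming everything lives in a single default namespace; the account and role names are placeholders, and resourceNames could narrow the rule further to just Service B's deployment:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-a            # hypothetical service account used by Service A's pods
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: deployment-patcher
  namespace: default
rules:
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["get", "patch"]   # only read and patch deployments, nothing else
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: deployment-patcher-binding
  namespace: default
subjects:
- kind: ServiceAccount
  name: app-a
  namespace: default
roleRef:
  kind: Role
  name: deployment-patcher
  apiGroup: rbac.authorization.k8s.io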

Kubernetes image pull from nexus3 docker host is failed

We have created a nexus3 docker host private registry on a CentOS machine and updated the same IP details in daemon.json under the docker folder.
Docker pull and push is working fine.
The same image, when deploying to kubernetes, is failing in the image pull state.
$ kubectl run deployname --image=nexus3provaterepo:port/image
Before that we create secret entries via the command $ kubectl create secret with the same user ID and password used for docker login -u userid -p passwd.
Here my problem is that the image pull is failing from the nexus3 docker host.
Please suggest how to verify the login via a kubernetes command and how to resolve this image pull issue.
Looking forward to your suggestions, thanks in advance.
So when pulling from private repos you need to specify an imagePullSecret, like so:
apiVersion: v1
kind: Pod
metadata:
  name: private-reg
spec:
  containers:
  - name: private-reg-container
    image: <your-private-image>
  # Specify the secret with your users credentials
  imagePullSecrets:
  - name: regcred
You would then use the kubectl apply -f functionality. I am not actually sure you can use this in the imperative CLI version of running a deployment, but all the documentation on this can be found here.
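For the registry in the question, creating and checking that secret would look roughly like this (regcred is just a placeholder name, and --docker-server should match the registry address used in your image reference):
kubectl create secret docker-registry regcred --docker-server=nexus3provaterepo:port --docker-username=<userid> --docker-password=<passwd>
kubectl get secret regcred --output=yaml   # confirm the secret exists
kubectl describe pod private-reg           # check the Events section if the pull still fails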