minikube can't access remote Harbor registry - kubernetes

Even after starting minikube with minikube start --insecure-registry "<HARBOR_HOST_IP>", when I try to run a deployment YAML file that includes an image path like <HARBOR_HOST_IP>/app/server, I get this error:
Failed to pull image "[HARBOR_IP]/app/server": rpc error: code = 2 desc = Error response from daemon: {"message":"Get https://[HARBOR_IP]/v1/_ping: dial tcp [HARBOR_IP]:443: getsockopt: connection refused"}
Error syncing pod
How do I set insecure-registry correctly in minikube?
Edit
Tag the current Docker image with port 80:
docker tag server <HARBOR_HOST_IP>:80/app/server
Push it to the Harbor registry server:
docker push <HARBOR_HOST_IP>:80/app/server
Unfortunately, the remote Harbor host denied the push:
The push refers to a repository [<HARBOR_HOST_IP>:80/app/server]
00491a929c2e: Preparing
ec4cc3fab4be: Preparing
e7d3ac95d998: Preparing
8bb050c3d78d: Preparing
4aa9e88e4148: Preparing
978b58726b5e: Waiting
2b0fb280b60d: Waiting
denied: requested access to the resource is denied
This happens even after adding <HARBOR_HOST_IP>:80 to the local insecure-registries list.
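Note: "denied: requested access to the resource is denied" from Harbor usually points at authentication or a missing project rather than TLS. A minimal sketch of the usual prerequisite, assuming the app project already exists in Harbor and your user is allowed to push to it:
docker login <HARBOR_HOST_IP>:80
docker push <HARBOR_HOST_IP>:80/app/server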

It works if you always specify port 80 when communicating with a Docker registry that listens on port 80.
Build an image:
docker build -t <REGISTRY_IP>:80/<name> <path>
Push it to registry:
docker push <REGISTRY_IP>:80/<name>
Start minikube with this insecure registry:
minikube start --insecure-registry <REGISTRY_IP>:80
Create deployment:
kubectl create -f test.yaml
where test.yaml is:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: test
spec:
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
      - image: <REGISTRY_IP>:80/<name>
        name: test
        imagePullPolicy: Always
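To verify the pull now succeeds, a quick check along these lines should do (a sketch; app=test is the pod label from the manifest above):
kubectl get pods -l app=test
kubectl describe pod <pod-name>   # the Events section shows any remaining pull errors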

Have you added a secret in Kubernetes?
Please refer to this link on adding a secret: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
kubectl create secret docker-registry regcred --docker-server=<your-registry-server> --docker-username=<your-name> --docker-password=<your-pword> --docker-email=<your-email>
Then use the secret in your Kubernetes YAML:
apiVersion: v1
kind: Pod
metadata:
  name: private-reg
spec:
  containers:
  - name: private-reg-container
    image: <your-private-image>
  imagePullSecrets:
  - name: regcred
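To confirm the secret was created as expected, something like this should work (a sketch using the regcred name from above):
kubectl get secret regcred --output=yaml
The .dockerconfigjson field should contain your base64-encoded registry credentials.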

Related

Kubernetes: Cannot deploy a simple "Couchbase" service

I am new to Kubernetes. I am trying to mimic the behavior I get with docker-compose when I serve a Couchbase database in a Docker container:
couchbase:
  image: couchbase
  volumes:
    - ./couchbase:/opt/couchbase/var
  ports:
    - 8091-8096:8091-8096
    - 11210-11211:11210-11211
I managed to create a cluster on my localhost using a tool called "kind":
kind create cluster --name my-cluster
kubectl config use-context my-cluster
Then I tried to use that cluster to deploy a Couchbase service.
I created a file named couchbase.yaml with the following content (again mimicking my docker-compose file):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: couchbase
  namespace: my-project
  labels:
    platform: couchbase
spec:
  replicas: 1
  selector:
    matchLabels:
      platform: couchbase
  template:
    metadata:
      labels:
        platform: couchbase
    spec:
      volumes:
      - name: couchbase-data
        hostPath:
          # directory location on host
          path: /home/me/my-project/couchbase
          # this field is optional
          type: Directory
      containers:
      - name: couchbase
        image: couchbase
        volumeMounts:
        - mountPath: /opt/couchbase/var
          name: couchbase-data
Then I start the deployment like this:
kubectl create namespace my-project
kubectl apply -f couchbase.yaml
kubectl expose deployment -n my-project couchbase --type=LoadBalancer --port=8091
However, my deployment never actually starts:
kubectl get deployments -n my-project couchbase
NAME        READY   UP-TO-DATE   AVAILABLE   AGE
couchbase   0/1     1            0           6m14s
And when I look for the logs I see this:
kubectl logs -n my-project -lplatform=couchbase --all-containers=true
Error from server (BadRequest): container "couchbase" in pod "couchbase-589f7fc4c7-th2r2" is waiting to start: ContainerCreating
As the OP mentioned in a comment, the issue was solved using an extra mount, as explained in the documentation: https://kind.sigs.k8s.io/docs/user/configuration/#extra-mounts
Here is the OP's comment, formatted so it's more readable:
the error shows up when I run this command:
kubectl describe pods -n my-project couchbase
I could fix it by creating a new kind cluster:
kind create cluster --config cluster.yaml
Passing this content in cluster.yaml:
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: inf
nodes:
- role: control-plane
  extraMounts:
  - hostPath: /home/me/my-project/couchbase
    containerPath: /couchbase
In couchbase.yaml, the path then becomes path: /couchbase, of course, as sketched below.
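For clarity, the volumes stanza of couchbase.yaml would then look roughly like this (a sketch, assuming the extraMounts configuration above):
volumes:
- name: couchbase-data
  hostPath:
    # containerPath from the kind extraMounts, as seen inside the kind node
    path: /couchbase
    type: Directory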

Error in deploy stage: "lchmod (file attributes) error: Not supported"

I am attempting to deploy an image "casbin-role-backend" to the cloud, but it always fails.
The following is from the log:
Preparing to start the job...
Pipeline image: latest
Preparing the build artifacts...
lchmod (file attributes) error: Not supported
.....
DEPLOYING using manifest
+++ kubectl apply --namespace default -f ./tmp.deployment.yaml
deployment.apps/casbin-role-backend unchanged
The Service "casbin-role-backend" is invalid: spec.ports[0].nodePort: Invalid value: 30080: provided port is already allocated
+++ set +x
CHECKING deployment rollout of casbin-role-backend
+++ kubectl rollout status deploy/casbin-role-backend --watch=true --timeout=150s --namespace default
error: deployment "casbin-role-backend" exceeded its progress deadline
+++ STATUS=fail
+++ set +x
SHOWING last events
LAST SEEN TYPE REASON OBJECT MESSAGE
41m Warning Failed pod/casbin-role-mgt-ui-7d59b6d4cf-2pbhm Error: InvalidImageName
2m11s Warning InspectFailed pod/casbin-role-backend-68d76464dd-vbvch Failed to apply default image tag "//:": couldn't parse image reference "//:": invalid reference format
...
DEPLOYMENT FAILED
....
OK
Finished: FAILED
And below is my deployment.yaml:
apiVersion: v1
kind: Service
metadata:
  name: casbin-role-backend
  labels:
    app: app
spec:
  type: NodePort
  ports:
  - port: 3000
    name: http
    nodePort: 30080
  selector:
    app: app
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: casbin-role-backend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
      - name: casbin-role-backend
        image: xxx/casbin-role-backend
        ports:
        - containerPort: 3000
Does anybody know what this error means? I have searched for some time but still cannot find what it is or how to fix it.
Update:
The source code originates from the repository below; I added the Dockerfile and deployment.yaml to deploy it on k8s.
https://github.com/alikhan866/Casbin-Role-Mgt-Dashboard-RBAC
Dockerfile source:
# pull official base image
FROM node:13.12.0-alpine
# set working directory
WORKDIR /dist
# add `/app/node_modules/.bin` to $PATH
ENV PATH /app/node_modules/.bin:$PATH
# install app dependencies
COPY package.json ./
COPY package-lock.json ./
RUN npm install
# add app
COPY . ./
# start app
CMD ["npm", "run dev"]
I see two issues here:
1.
The Service "casbin-role-backend" is invalid: spec.ports[0].nodePort: Invalid value: 30080: provided port is already allocated
It means that the nodePort used by the Service is already in use. You can find the conflicting service with kubectl get svc --all-namespaces | grep '30080' and then either change the port value or delete that service. Also, make sure that you specify the proper namespace. One alternative is sketched below.
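If you would rather let Kubernetes pick a free port than hunt for one, a sketch of the Service ports section with the fixed nodePort removed:
spec:
  type: NodePort
  ports:
  - port: 3000
    name: http
    # nodePort omitted: Kubernetes assigns a free port from the 30000-32767 range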
2.
2m11s Warning InspectFailed pod/casbin-role-backend-68d76464dd-vbvch Failed to apply default image tag "//:": couldn't parse image reference "//:": invalid reference format
My educated guess here is that your image name is invalid because it starts with https:// or ://. A proper image reference looks like this:
image: registry_host/organization_name/image_name:image_version
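As a concrete, purely hypothetical example using the image from this question pushed to Docker Hub under an organization named myorg:
image: docker.io/myorg/casbin-role-backend:1.0.0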

How to configure microk8s kubernetes to use private containers on https://hub.docker.com/?

The microk8s document "Working with a private registry" leaves me unsure what to do. The "Secure registry" portion says Kubernetes does it one way (without indicating whether Kubernetes' way applies to microk8s), and microk8s uses containerd inside its implementation.
My YAML file contains a reference to a private container on dockerhub.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: blaw
spec:
  replicas: 1
  selector:
    matchLabels:
      app: blaw
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: blaw
    spec:
      containers:
      - image: johngrabner/py_blaw_service:v0.3.10
        name: py-transcribe-service
When I microk8s kubectl apply this file and do a microk8s kubectl describe, I get:
Warning Failed 16m (x4 over 18m) kubelet Failed to pull image "johngrabner/py_blaw_service:v0.3.10": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/johngrabner/py_blaw_service:v0.3.10": failed to resolve reference "docker.io/johngrabner/py_blaw_service:v0.3.10": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed
I have verified that I can download this repo from a console doing a docker pull command.
Pods using public containers work fine in microk8s.
The file /var/snap/microk8s/current/args/containerd-template.toml already contains something that makes Docker Hub work, since public containers work. Within this file, I found:
# 'plugins."io.containerd.grpc.v1.cri".registry' contains config related to the registry
[plugins."io.containerd.grpc.v1.cri".registry]
  # 'plugins."io.containerd.grpc.v1.cri".registry.mirrors' are namespace to mirror mapping for all namespaces.
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
      endpoint = ["https://registry-1.docker.io", ]
    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."localhost:32000"]
      endpoint = ["http://localhost:32000"]
The above does not appear related to authentication.
On the internet, I found instructions to create a secret to store credentials, but this does not work either.
microk8s kubectl create secret generic regcred --from-file=.dockerconfigjson=/home/john/.docker/config.json --type=kubernetes.io/dockerconfigjson
While you have created the secret, you then have to set up your deployment/pod to use that secret in order to pull the image. This can be achieved with imagePullSecrets, as described in the microk8s document you mentioned.
Since you already created your secret, you just have to reference it in your deployment:
...
spec:
  containers:
  - image: johngrabner/py_blaw_service:v0.3.10
    name: py-transcribe-service
  imagePullSecrets:
  - name: regcred
...
For more reading, check how to Pull an Image from a Private Registry.
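To double-check that the secret has the type the kubelet expects, a sketch using the regcred name from above:
microk8s kubectl get secret regcred -o jsonpath='{.type}'
This should print kubernetes.io/dockerconfigjson.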

How do I expose a registry hosted in Minikube?

I have started a Kubernetes cluster with Minikube. I used a simple deployment file to create a deployment that runs the Registry container:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: registry
  labels:
    app: registry
spec:
  replicas: 3
  selector:
    matchLabels:
      app: registry
  template:
    metadata:
      labels:
        app: registry
    spec:
      containers:
      - name: registry
        image: registry:latest
        ports:
        - containerPort: 80
After this, I expose the deployment using a service:
$ kubectl expose deployment/registry --type="LoadBalancer" --port 5000
$ minikube service registry
This exposes my registry to the host machine. I can navigate to http://172.17.174.88:31826/v2/_catalog in my browser and see there are no repositories yet. I have an image for an ASP.NET WebApi project called WeatherApp in my host machine's Docker. I run these commands:
$ docker tag 0a259f7ce186 172.17.174.88:31412/weatherapp
$ docker push 172.17.174.88:31412/weatherapp
This causes the following error:
The push refers to repository [172.17.174.88:31412/weatherapp]
Get https://172.17.174.88:31412/v2/: dial tcp 172.17.174.88:31412: connect: no route to host
I think the problem is that my docker client is trying to connect to the registry over HTTPS, which will not work. How can I force my docker client to use HTTP to push the image to my registry?
I'm afraid the Docker client won't simply fall back to plain HTTP; it uses HTTPS by default. To push over plain HTTP, you need to configure the address as an insecure registry in your Docker daemon.
This might help: https://docs.docker.com/registry/insecure/
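A minimal sketch of that client-side configuration, assuming a Linux host and the address/port from the error above (restart the Docker daemon afterwards):
# /etc/docker/daemon.json
{
  "insecure-registries": ["172.17.174.88:31412"]
}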

Private repository passing through kubernetes yaml file

We have tried to set up a hivemq manifest file. We have a hivemq Docker image in our private repository.
Step 1: I logged into the private repository:
docker login "private repo name"
It was successful.
After that, I tried to create a manifest file for it, like below:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hivemq
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: hivemq1
    spec:
      containers:
      - env:
          # xxxxx some environment values I have passed
        name: hivemq
        image: privatereponame:portnumber/directoryname/hivemq:
        ports:
        - containerPort: 1883
It creates successfully, but I am getting the issues below. Could anyone please help solve this?
hivemq-4236597916-mkxr4 0/1 ImagePullBackOff 0 1h
Logs:
Error from server (BadRequest): container "hivemq16" in pod "hivemq16-1341290525-qtkhb" is waiting to start: InvalidImageName
Sometimes I get this kind of issue instead:
Error from server (BadRequest): container "hivemq" in pod "hivemq-4236597916-mkxr4" is waiting to start: trying and failing to pull image
In order to use a private Docker registry with Kubernetes, it's not enough to docker login.
You need to add a Kubernetes docker-registry Secret with your credentials, as described here: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/. That article also covers the imagePullSecrets setting you have to add to your YAML deployment file, referencing that secret.
I just fixed this on my machine: kubectl v1.9.0 failed to create the secret properly. Upgrading to v1.9.1, deleting the secret, and recreating it resolved the issue for me. https://github.com/kubernetes/kubernetes/issues/57427