Migrate helm chart from public repository to private - kubernetes

I'm trying to deploy some services using a helm chart on AWS EKS.
$ helm repo add bitnami https://charts.bitnami.com/bitnami
$ helm install postgresql --version 8.6.4 bitnami/postgresql
After I run the helm command, the pod is not created because the connection to docker.io is blocked in the EKS cluster.
$ kubectl describe po
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 12m default-scheduler Successfully assigned airflow/postgresql-postgresql-0 to xxxxxx.compute.internal
Normal SuccessfulAttachVolume 12m attachdetach-controller AttachVolume.Attach succeeded for volume "pvc-e3e438ef-50a4-4c19-a788-3d3755b89bae"
Normal Pulling 9m50s (x4 over 12m) kubelet Pulling image "docker.io/bitnami/postgresql:11.7.0-debian-10-r26"
Warning Failed 9m35s (x4 over 11m) kubelet Failed to pull image "docker.io/bitnami/postgresql:11.7.0-debian-10-r26": rpc error: code = Unknown desc = Error response from daemon: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Warning Failed 9m35s (x4 over 11m) kubelet Error: ErrImagePull
Warning Failed 9m23s (x6 over 11m) kubelet Error: ImagePullBackOff
Normal BackOff 113s (x36 over 11m) kubelet Back-off pulling image "docker.io/bitnami/postgresql:11.7.0-debian-10-r26"
How can I change the repository domain from docker.io to my ECR repository?
I want to push this chart to my ECR repository.
I want to make the chart pull the image from my ECR repository instead of docker.io.

How about trying to override the image registry value in the values.yaml of bitnami/postgresql using --set? Refer here for more details.
image:
  registry: docker.io
  repository: bitnami/postgresql
  tag: 11.10.0-debian-10-r24
You may override the image.registry value above as follows:
$ helm install postgresql \
  --set image.registry=your-registry \
  --set image.repository=bitnami/postgresql \
  --set image.tag=11.10.0-debian-10-r24 \
  --version 8.6.4 bitnami/postgresql
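Since you also want to serve both the chart and the image from ECR, here is a rough sketch of mirroring them from a machine that can reach docker.io (the account ID, region and repository names are placeholders; pushing a chart to ECR as an OCI artifact needs Helm 3.8+):
# Log in to ECR with both docker and helm
$ aws ecr get-login-password --region <region> | docker login --username AWS --password-stdin <account-id>.dkr.ecr.<region>.amazonaws.com
$ aws ecr get-login-password --region <region> | helm registry login --username AWS --password-stdin <account-id>.dkr.ecr.<region>.amazonaws.com
# Mirror the PostgreSQL image into ECR (ECR repositories must exist before pushing)
$ aws ecr create-repository --repository-name bitnami/postgresql
$ docker pull docker.io/bitnami/postgresql:11.7.0-debian-10-r26
$ docker tag docker.io/bitnami/postgresql:11.7.0-debian-10-r26 <account-id>.dkr.ecr.<region>.amazonaws.com/bitnami/postgresql:11.7.0-debian-10-r26
$ docker push <account-id>.dkr.ecr.<region>.amazonaws.com/bitnami/postgresql:11.7.0-debian-10-r26
# Mirror the chart itself into ECR, stored as an OCI artifact
$ aws ecr create-repository --repository-name postgresql
$ helm pull bitnami/postgresql --version 8.6.4
$ helm push postgresql-8.6.4.tgz oci://<account-id>.dkr.ecr.<region>.amazonaws.com/
Then point image.registry at <account-id>.dkr.ecr.<region>.amazonaws.com when you install, as shown above.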

Related

istio bookinfo sample application deployment failed for "ImagePullBackOff"

I am trying to deploy istio's sample bookinfo application using the below command:
kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml
from here
but each time I am getting ImagePullBackoff error like this:
NAME READY STATUS RESTARTS AGE
details-v1-c74755ddf-m878f 2/2 Running 0 6m32s
productpage-v1-778ddd95c6-pdqsk 2/2 Running 0 6m32s
ratings-v1-5564969465-956bq 2/2 Running 0 6m32s
reviews-v1-56f6655686-j7lb6 1/2 ImagePullBackOff 0 6m32s
reviews-v2-6b977f8ff5-55tgm 1/2 ImagePullBackOff 0 6m32s
reviews-v3-776b979464-9v7x5 1/2 ImagePullBackOff 0 6m32s
For error details, I have run :
kubectl describe pod reviews-v1-56f6655686-j7lb6
Which returns these:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 7m41s default-scheduler Successfully assigned default/reviews-v1-56f6655686-j7lb6 to minikube
Normal Pulled 7m39s kubelet Container image "docker.io/istio/proxyv2:1.15.3" already present on machine
Normal Created 7m39s kubelet Created container istio-init
Normal Started 7m39s kubelet Started container istio-init
Warning Failed 5m39s kubelet Failed to pull image "docker.io/istio/examples-bookinfo-reviews-v1:1.17.0": rpc error: code = Unknown desc = context deadline exceeded
Warning Failed 5m39s kubelet Error: ErrImagePull
Normal Pulled 5m39s kubelet Container image "docker.io/istio/proxyv2:1.15.3" already present on machine
Normal Created 5m39s kubelet Created container istio-proxy
Normal Started 5m39s kubelet Started container istio-proxy
Normal BackOff 5m36s (x3 over 5m38s) kubelet Back-off pulling image "docker.io/istio/examples-bookinfo-reviews-v1:1.17.0"
Warning Failed 5m36s (x3 over 5m38s) kubelet Error: ImagePullBackOff
Normal Pulling 5m25s (x2 over 7m38s) kubelet Pulling image "docker.io/istio/examples-bookinfo-reviews-v1:1.17.0"
Do I need to build the Dockerfile first and push it to a local repository? There are no clear instructions there, or I failed to find any.
Can anybody help?
If you check on Docker Hub, the image is there:
https://hub.docker.com/r/istio/examples-bookinfo-reviews-v1/tags
So the error you need to deal with is context deadline exceeded while trying to pull it from Docker Hub. This is likely a networking error (a generic Go error saying it took too long). Depending on where your cluster is running, you can manually do a docker pull from the nodes, and that should work.
EDIT: for minikube, do a minikube ssh and then a docker pull.
Solved the problem with the command below:
minikube ssh docker pull istio/examples-bookinfo-reviews-v1:1.17.0
from this GitHub issue here
See also: How to use local docker images with Minikube?
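If you would rather build or preload the image locally instead of pulling from inside the node, a minimal sketch (standard minikube/docker commands; the image tag is the one from the events above):
# Point the local docker CLI at minikube's Docker daemon
eval $(minikube docker-env)
docker pull istio/examples-bookinfo-reviews-v1:1.17.0
# Newer minikube releases can also load an image directly:
minikube image load istio/examples-bookinfo-reviews-v1:1.17.0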
Hope this may help somebody

kubectl ImagePullBackOff due to secret

I'm creating kubevirt in minikube; initially kubevirt-operator.yaml fails with ImagePullBackOff. After I added the secret in the yaml
imagePullSecrets:
  - name: regcred
containers:
all my virt-operator* pods started to run, but the virt-api* pods still show ImagePullBackOff.
The error comes out as
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 27m default-scheduler Successfully assigned kubevirt/virt-api-787487d9cd-t68qf to minikube
Normal Pulling 25m (x4 over 27m) kubelet Pulling image "us-ashburn-1.ocir.io/xxx/virt-api:v0.54.0"
Warning Failed 25m (x4 over 27m) kubelet Failed to pull image "us-ashburn-1.ocir.io/xxx/virt-api:v0.54.0": rpc error: code = Unknown desc = Error response from daemon: pull access denied for us-ashburn-1.ocir.io/xxx/virt-api, repository does not exist or may require 'docker login': denied: Anonymous users are only allowed read access on public repos
Warning Failed 25m (x4 over 27m) kubelet Error: ErrImagePull
Warning Failed 25m (x6 over 27m) kubelet Error: ImagePullBackOff
Normal BackOff 2m26s (x106 over 27m) kubelet Back-off pulling image "us-ashburn-1.ocir.io/xxx/virt-api:v0.54.0"
Manually, I can pull the same image with docker login. Any help would be much appreciated. Thanks
This docker image looks like it is in a private registry (and from Oracle), and I assume the regcred is not correct. Can you log in there with docker login? If so, you can create the regcred secret like this:
$ kubectl create secret docker-registry regcred --docker-server=<region-key>.ocir.io --docker-username='<tenancy-namespace>/<oci-username>' --docker-password='<oci-auth-token>' --docker-email='<email-address>'
Also check this oracle tutorial: https://www.oracle.com/webfolder/technetwork/tutorials/obe/oci/oke-and-registry/index.html
There you can find the steps for adding registry secrets to the cluster.
If you are using a private registry, check that your secret exists and the secret is correct. Your secret should also be in the same namespace.
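For example, to verify that (secret and namespace names taken from the events above; the jsonpath syntax is standard kubectl):
# Check the secret exists in the namespace the pods run in
kubectl get secret regcred -n kubevirt
# Inspect the stored registry credentials
kubectl get secret regcred -n kubevirt -o jsonpath='{.data.\.dockerconfigjson}' | base64 --decode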
Your Minikube is a VM, not your localhost. Try this:
Open Terminal
eval $(minikube docker-env)
docker build .
kubectl create -f deployment.yaml
This is only valid for the current terminal; if you close the terminal, open a new one and run eval $(minikube docker-env) again.
eval $(minikube docker-env) makes docker build the image inside Minikube's Docker daemon.
Also, try logging in to Docker on all nodes with docker login.
There is also a lengthy blog post describing how to debug ImagePullBackOff in depth here.
Hey, if you look here, I think you can find some helpful documentation.
What they are doing is uploading the docker config file, which contains the login credentials, as a secret and then referring to that in the deployment.
You could try to follow these steps and do something similar. Let me know if it works.
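As a rough sketch of what referring to the secret in a pod spec looks like (the secret name and image are the ones from your events; the container name is illustrative):
spec:
  imagePullSecrets:
    - name: regcred   # must exist in the same namespace as the pod (kubevirt)
  containers:
    - name: virt-api  # illustrative container name
      image: us-ashburn-1.ocir.io/xxx/virt-api:v0.54.0
Alternatively, you can attach the secret to the service account the pods use, so every pod created with that service account picks it up; for example, assuming the default service account:
kubectl patch serviceaccount default -n kubevirt -p '{"imagePullSecrets": [{"name": "regcred"}]}'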

Skaffold and Gitlab image repositories

This is my Skaffold config.
apiVersion: skaffold/v2beta19
kind: Config
metadata:
  name: my-api
build:
  local:
    useDockerCLI: true
  artifacts:
    - image: registry.gitlab.com/lodiee/my-api
      docker:
        buildArgs:
          DATABASE_HOST: '{{.DATABASE_HOST}}'
deploy:
  helm:
    releases:
      - name: my-api
        artifactOverrides:
          image: registry.gitlab.com/lodiee/my-api
        imageStrategy:
          helm: {}
        remoteChart: bitnami/node
        setValueTemplates:
          image.tag: '{{.DIGEST_HEX}}'
        setValues:
          image.repository: registry.gitlab.com/lodiee/my-api
When I run skaffold dev, everything works fine until I see an ImagePullBackOff error in the pod events. Here is a partial output:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 102s default-scheduler Successfully assigned default/my-api-node-55cf68d989-7vltw to kind-control-plane
Normal BackOff 21s (x5 over 99s) kubelet Back-off pulling image "docker.io/registry.gitlab.com/lodiee/my-api:60ef7d4a463e269de9e176afbdd2362fd50870e494ff7a8abf25ece16c0d100c"
Warning Failed 21s (x5 over 99s) kubelet Error: ImagePullBackOff
Normal Pulling 6s (x4 over 101s) kubelet Pulling image "docker.io/registry.gitlab.com/lodiee/my-api:60ef7d4a463e269de9e176afbdd2362fd50870e494ff7a8abf25ece16c0d100c"
Warning Failed 4s (x4 over 100s) kubelet Failed to pull image "docker.io/registry.gitlab.com/lodiee/my-api:60ef7d4a463e269de9e176afbdd2362fd50870e494ff7a8abf25ece16c0d100c": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/registry.gitlab.com/lodiee/my-api:60ef7d4a463e269de9e176afbdd2362fd50870e494ff7a8abf25ece16c0d100c": failed to resolve reference "docker.io/registry.gitlab.com/lodiee/my-api:60ef7d4a463e269de9e176afbdd2362fd50870e494ff7a8abf25ece16c0d100c": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed
As you can see from the output, Kubernetes adds a docker.io prefix in front of my image repo address, so it can't find the image at that address.
Is this normal behavior? Also, this is a local development setup and I use skaffold dev to launch it, so why does k8s try to go to that address?
Thank you.

helm3 installation fails with: failed post-install: timed out waiting for the condition

I'm new to Kubernetes and Helm. I have installed k3d and helm:
k3d version v1.7.0
k3s version v1.17.3-k3s1
helm version
version.BuildInfo{Version:"v3.2.4", GitCommit:"0ad800ef43d3b826f31a5ad8dfbb4fe05d143688", GitTreeState:"clean", GoVersion:"go1.13.12"}
I do have a cluster created with 10 worker nodes. When I try to install stackstorm-ha on the cluster I see the following issues:
helm install stackstorm/stackstorm-ha --generate-name --debug
client.go:534: [debug] stackstorm-ha-1592860860-job-st2-apikey-load: Jobs active: 1, jobs failed: 0, jobs succeeded: 0
Error: failed post-install: timed out waiting for the condition
helm.go:84: [debug] failed post-install: timed out waiting for the condition
njbbmacl2813:~ gangsh9$ kubectl get pods
Unable to connect to the server: net/http: TLS handshake timeout
kubectl describe pods shows either:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler Successfully assigned default/stackstorm-ha-1592857897-st2api-7f6c877b9c-dtcp5 to k3d-st2hatest-worker-5
Warning Failed 23m kubelet, k3d-st2hatest-worker-5 Error: context deadline exceeded
Normal Pulling 17m (x5 over 37m) kubelet, k3d-st2hatest-worker-5 Pulling image "stackstorm/st2api:3.3dev"
Normal Pulled 17m (x5 over 28m) kubelet, k3d-st2hatest-worker-5 Successfully pulled image "stackstorm/st2api:3.3dev"
Normal Created 17m (x5 over 28m) kubelet, k3d-st2hatest-worker-5 Created container st2api
Normal Started 17m (x4 over 28m) kubelet, k3d-st2hatest-worker-5 Started container st2api
Warning BackOff 53s (x78 over 20m) kubelet, k3d-st2hatest-worker-5 Back-off restarting failed container
or
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler Successfully assigned default/stackstorm-ha-1592857897-st2timersengine-c847985d6-74h5k to k3d-st2hatest-worker-2
Warning Failed 6m23s kubelet, k3d-st2hatest-worker-2 Failed to pull image "stackstorm/st2timersengine:3.3dev": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/stackstorm/st2timersengine:3.3dev": failed to resolve reference "docker.io/stackstorm/st2timersengine:3.3dev": failed to authorize: failed to fetch anonymous token: Get https://auth.docker.io/token?scope=repository%3Astackstorm%2Fst2timersengine%3Apull&service=registry.docker.io: net/http: TLS handshake timeout
Warning Failed 6m23s kubelet, k3d-st2hatest-worker-2 Error: ErrImagePull
Normal BackOff 6m22s kubelet, k3d-st2hatest-worker-2 Back-off pulling image "stackstorm/st2timersengine:3.3dev"
Warning Failed 6m22s kubelet, k3d-st2hatest-worker-2 Error: ImagePullBackOff
Normal Pulling 6m10s (x2 over 6m37s) kubelet, k3d-st2hatest-worker-2 Pulling image "stackstorm/st2timersengine:3.3dev"
Kind of stuck here.
Any help would be greatly appreciated.
The TLS handshake timeout error is very common when the machine you are running your deployment on is running out of resources. Alternatively, the issue can be caused by a slow internet connection or proxy settings, but we ruled that out since you can pull and run docker images locally and deploy a small nginx webserver in your cluster.
As you may notice, the stackstorm helm chart installs a large number of services/pods inside your cluster, which can take up a lot of resources.
It will install 2 replicas for each component of StackStorm
microservices for redundancy, as well as backends like RabbitMQ HA,
MongoDB HA Replicaset and etcd cluster that st2 relies on for MQ, DB
and distributed coordination respectively.
I deployed stackstorm on both k3d and GKE but I had to use fast machines in order to deploy this quickly and successfully.
NAME: stackstorm
LAST DEPLOYED: Mon Jun 29 15:25:52 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
Congratulations! You have just deployed StackStorm HA!
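If you want to check whether your nodes are the bottleneck, a couple of standard commands (kubectl top requires metrics-server to be installed; the node name is a placeholder):
# Per-node CPU/memory usage (needs metrics-server)
kubectl top nodes
# What is already requested and allocated on a specific node
kubectl describe node <node-name> | grep -A 10 "Allocated resources"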

Installing kubernetes-dashboard with helm fails

I've just created a new kubernetes cluster. The only thing I have done beyond setting up the cluster is install Tiller using helm init and install the kubernetes dashboard through helm install stable/kubernetes-dashboard.
The helm install command seems to be successful and helm ls outputs:
NAME REVISION UPDATED STATUS CHART APP VERSION NAMESPACE
exhaling-ladybug 1 Thu Oct 24 16:56:49 2019 DEPLOYED kubernetes-dashboard-1.10.0 1.10.1 default
However after waiting a few minutes the deployment is still not ready.
Running kubectl get pods shows that the pod's status as CrashLoopBackOff.
NAME READY STATUS RESTARTS AGE
exhaling-ladybug-kubernetes-dashboard 0/1 CrashLoopBackOff 10 31m
The description for the pod shows the following events:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 31m default-scheduler Successfully assigned default/exhaling-ladybug-kubernetes-dashboard to nodes-1
Normal Pulling 31m kubelet, nodes-1 Pulling image "k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1"
Normal Pulled 31m kubelet, nodes-1 Successfully pulled image "k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1"
Normal Started 30m (x4 over 31m) kubelet, nodes-1 Started container kubernetes-dashboard
Normal Pulled 30m (x4 over 31m) kubelet, nodes-1 Container image "k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1" already present on machine
Normal Created 30m (x5 over 31m) kubelet, nodes-1 Created container kubernetes-dashboard
Warning BackOff 107s (x141 over 31m) kubelet, nodes-1 Back-off restarting failed container
And the logs show the following panic message
panic: secrets is forbidden: User "system:serviceaccount:default:exhaling-ladybug-kubernetes-dashboard" cannot create resource "secrets" in API group "" in the namespace "kube-system"
Am I doing something wrong? Why is it trying to create a secret somewhere it cannot?
Is it possible to set this up without giving the dashboard account cluster-admin permissions?
Check this out mate:
https://akomljen.com/installing-kubernetes-dashboard-per-namespace/
You can create your own roles if you want to.
By default I have used the default namespace, but if yours is different you need to replace it with your own.
kubectl create serviceaccount exhaling-ladybug-kubernetes-dashboard
kubectl create clusterrolebinding kubernetes-dashboard --clusterrole=cluster-admin --serviceaccount=default:exhaling-ladybug-kubernetes-dashboard
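If you'd rather not grant cluster-admin, here is a rough namespace-scoped sketch instead (the Role sits in kube-system because that is the namespace the panic complains about; the resource list and names are illustrative, not the chart's official RBAC):
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: dashboard-secrets        # illustrative name
  namespace: kube-system
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get", "create", "update", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: dashboard-secrets
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: dashboard-secrets
subjects:
  - kind: ServiceAccount
    name: exhaling-ladybug-kubernetes-dashboard
    namespace: default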
Based on the error you have posted, what is happening is:
1. helm is trying to install the dashboard, but by default it picks up the namespace you have provided.
To solve that:
1. either create roles scoped to the namespace you are installing into (by default the namespace is: default)
2. or install the helm chart in the namespace the chart expects; in your case you can do:
helm install stable/kubernetes-dashboard --name=kubernetes-dashboard --namespace=kube-system
Try creating a cluster role binding:
kubectl create clusterrolebinding kubernetes-dashboard --clusterrole=cluster-admin --serviceaccount=kube-system:kubernetes-dashboard