I run
docker pull ghcr.io/.../test-service
Everything works just fine. However, when I try to use the image in a Deployment and apply that Deployment to a Minikube instance, I get...
Warning Failed 14s (x4 over 93s) kubelet Error: ErrImagePull
Normal BackOff 1s (x6 over 92s) kubelet Back-off pulling image "ghcr.io/.../test-service:latest"
Warning Failed 1s (x6 over 92s) kubelet Error: ImagePullBackOff
How do I configure Minikube to use my GitHub PAT?
My deployment looks like this...
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-app
  namespace: foo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-app
      version: v1
  template:
    metadata:
      labels:
        app: test-app
        version: v1
    spec:
      serviceAccountName: test-app
      containers:
      - image: ghcr.io/.../test-service:latest
        imagePullPolicy: Always
        name: test-app
        ports:
        - containerPort: 8000
It's because you did not create a Docker registry secret. To do so, you can follow https://dev.to/asizikov/using-github-container-registry-with-kubernetes-38fb
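In short, a minimal sketch (the secret name ghcr-pull-secret is just a placeholder; the namespace foo and service account test-app come from the deployment above): create a docker-registry secret from your GitHub username and PAT, then reference it either in the pod spec via imagePullSecrets or on the service account the Deployment already uses.

# create the pull secret in the namespace the Deployment runs in
kubectl create secret docker-registry ghcr-pull-secret \
  --namespace foo \
  --docker-server=ghcr.io \
  --docker-username=<github-username> \
  --docker-password=<github-pat>

# option A: reference it from the pod spec
#   spec:
#     imagePullSecrets:
#     - name: ghcr-pull-secret

# option B: attach it to the service account used by the Deployment
kubectl patch serviceaccount test-app \
  --namespace foo \
  -p '{"imagePullSecrets": [{"name": "ghcr-pull-secret"}]}'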
Another, simpler way is to add the registry-creds addon to Minikube:
$ minikube addons configure registry-creds
# set up ghcr.io with your GitHub username and PAT (Personal Access Token)
$ minikube addons enable registry-creds
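You can verify that the addon created the pull secret (it names the Docker/private registry secret dpr-secret; check the namespace your workload runs in):

kubectl get secret dpr-secret --namespace default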
Then you can reference the GitHub Container Registry credentials as dpr-secret, as the deployment example below shows:
# deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: b2b-integration-api
  labels:
    app: api-app
  namespace: default
spec:
  template:
    metadata:
      name: b2b-integration-api-pods
      labels:
        app: api-app
        tier: api-layer
    spec:
      containers:
      - name: b2b-integration-api
        image: ghcr.io/<your github user or org>/<image>:<version>
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
      imagePullSecrets:
      - name: dpr-secret
  replicas: 2
  selector:
    matchLabels:
      tier: api-layer
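After applying the manifest, you can confirm that the pull now succeeds by checking the pod events (the label tier=api-layer comes from the example above):

kubectl apply -f deployment.yml
kubectl get pods -l tier=api-layer
kubectl describe pod <pod-name>   # the Events section should show "Successfully pulled image" instead of ErrImagePull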
Related
I would like to deploy a specific image in Kubernetes using skaffold buildpacks. The build works fine, but the deployment to Kubernetes fails because skaffold didn't use my Docker Hub ID as a prefix; only skaffold-buildpacks is passed to the silent kubectl command.
apiVersion: skaffold/v2beta21
kind: Config
build:
  artifacts:
  - image: systemdevformations/skaffold-buildpacks
    buildpacks:
      builder: "gcr.io/buildpacks/builder:v1"
      trustBuilder: true
      env:
      - GOPROXY={{.GOPROXY}}
profiles:
- name: gcb
  build:
    googleCloudBuild: {}
DEBU[0027] Running command: [kubectl --context kubernetes-admin#kubernetes create --dry-run=client -oyaml -f /home/ubuntu/skaffold/examples/buildpacks/k8s/web.yaml] subtask=-1 task=DevLoop
DEBU[0027] Command output: [apiVersion: v1
kind: Service
metadata:
  name: web
  namespace: default
spec:
  ports:
  - name: http
    port: 8080
  selector:
    app: web
  type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
  namespace: default
spec:
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - image: skaffold-buildpacks
        name: web
        ports:
        - containerPort: 8080
Kubernetes auto-generated script doesn't use the Docker Hub image prefix
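One likely cause, assuming skaffold's usual image substitution (not confirmed in this thread): skaffold only rewrites image references in the manifest that match an artifact's image name, so k8s/web.yaml should reference the full artifact name instead of the bare skaffold-buildpacks:

# k8s/web.yaml (Deployment excerpt)
    spec:
      containers:
      - name: web
        # must match the artifact image in skaffold.yaml so skaffold can
        # rewrite it with the built tag at deploy time
        image: systemdevformations/skaffold-buildpacks
        ports:
        - containerPort: 8080

Alternatively, keep the short artifact name and run skaffold with --default-repo=<your-dockerhub-id> so the Docker Hub prefix is added for you.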
After setting up my Kubernetes cluster on GCP, I used the command kubectl scale deployment superappip --replicas=30 from the Google console to scale my deployments. What should be added to my deployment file myip-service.yaml to do the same?
The following is an example of a Deployment. It creates a ReplicaSet to bring up three nginx Pods:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
You can read more here.
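To answer the question directly, a sketch against the asker's myip-service.yaml (only the relevant field is shown): set spec.replicas in the Deployment and re-apply the file, which has the same effect as the kubectl scale command.

# myip-service.yaml (excerpt)
spec:
  replicas: 30   # same effect as: kubectl scale deployment superappip --replicas=30

Then run kubectl apply -f myip-service.yaml.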
For a Kubernetes Deployment we can specify imagePullSecrets to allow it to pull Docker images from our private registry, but as far as I can tell, StatefulSet doesn't support this?
How can I supply a pull secret to my StatefulSet?
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis
  namespace: {{ .Values.namespace }}
  labels:
    app: redis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  serviceName: redis-service
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: redis
    spec:
      terminationGracePeriodSeconds: 10
      # imagePullSecrets not valid here for StatefulSet :-(
      containers:
      - image: {{ .Values.image }}
StatefulSet supports imagePullSecrets. You can check it as follows.
$ kubectl explain statefulset.spec.template.spec --api-version apps/v1
:
   imagePullSecrets   <[]Object>
     ImagePullSecrets is an optional list of references to secrets in the same
     namespace to use for pulling any of the images used by this PodSpec. If
     specified, these secrets will be passed to individual puller
     implementations for them to use. For example, in the case of docker, only
     DockerConfig type secrets are honored. More info:
     https://kubernetes.io/docs/concepts/containers/images#specifying-imagepullsecrets-on-a-pod
:
For instance, you can first check whether the following sample StatefulSet can be created in your cluster.
$ kubectl create -f - <<EOF
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: "nginx"
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      imagePullSecrets:
      - name: YOUR-PULL-SECRET-NAME
      containers:
      - name: nginx
        image: k8s.gcr.io/nginx-slim:0.8
        ports:
        - containerPort: 80
          name: web
EOF
$ kubectl get pod web-0 -o yaml | \
    grep -E '^[[:space:]]+imagePullSecrets:' -A1
  imagePullSecrets:
  - name: YOUR-PULL-SECRET-NAME
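Applied to the Helm-templated manifest from the question, the field goes under spec.template.spec (a sketch; the secret name is a placeholder you would create or template from your values):

    spec:
      terminationGracePeriodSeconds: 10
      imagePullSecrets:
      - name: <your-pull-secret-name>
      containers:
      - image: {{ .Values.image }}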
I'm trying to set up metrics to activate HPA (Horizontal Pod Autoscaling).
I am following this tutorial, specifically the Custom Metrics (Prometheus) part.
Unfortunately, when I execute the command below:
kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1
{"kind":"APIResourceList","apiVersion":"v1","groupVersion":"custom.metrics.k8s.io/v1beta1","resources":[]}
I should see many resources listed, but there is nothing.
This might be an issue with how you set up metrics-server: metrics-server may not be able to reach your nodes on their InternalIP.
The solution is to replace the metrics-server-deployment.yaml file in metrics-server/deploy/1.8+ with the following YAML:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: metrics-server
  namespace: kube-system
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    k8s-app: metrics-server
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  template:
    metadata:
      name: metrics-server
      labels:
        k8s-app: metrics-server
    spec:
      serviceAccountName: metrics-server
      volumes:
      # mount in tmp so we can safely use from-scratch images and/or read-only containers
      - name: tmp-dir
        emptyDir: {}
      containers:
      - command:
        - /metrics-server
        - --metric-resolution=30s
        - --kubelet-insecure-tls
        - --kubelet-preferred-address-types=InternalIP
        name: metrics-server
        image: k8s.gcr.io/metrics-server-amd64:v0.3.1
        imagePullPolicy: Always
        volumeMounts:
        - name: tmp-dir
          mountPath: /tmp
Also, enable --authentication-token-webhook in the kubelet configuration; then you will be able to get the metrics.
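On a kubeadm-managed node, one way to do that (a sketch; the file path may differ on your setup) is to enable webhook authentication in the kubelet's KubeletConfiguration and restart the kubelet:

# /var/lib/kubelet/config.yaml (KubeletConfiguration excerpt)
authentication:
  webhook:
    enabled: true

# apply the change
sudo systemctl restart kubelet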
Also, check out my answer for step-by-step instructions on setting up HPA with metrics-server:
How to Enable KubeAPI server for HPA Autoscaling Metrics
Hope this helps. Let me know if you face any issues.
I created a deployment like this:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: scs-db-sink
spec:
  selector:
    matchLabels:
      app: scs-db-sink
  replicas: 1
  template:
    metadata:
      labels:
        app: scs-db-sink
    spec:
      nodeSelector:
        cloud.google.com/gke-nodepool: service-pool
      containers:
      - name: scs-db-sink
        image: 'IMAGE_NAME'
        imagePullPolicy: Always
        ports:
        - containerPort: 1068
kubectl get pods shows me that the pod is running:
scs-db-sink-74c4b6cd6b-tchm9 1/1 Running 0 16m
Question:
How can I set up the pod names to be scs-db-sink-0, scs-db-sink-1, and so on when I scale up?
Thanks
A Deployment's Pods are named <replicaset-name>-<random-suffix>, where the ReplicaSet name is <deployment-name>-<random-suffix>. The ReplicaSet is created automatically by the Deployment, so you can't get the names you expect from a Deployment.
However, you can use a StatefulSet in this case: a StatefulSet's Pods get stable, ordinal names, exactly as you describe. Read about StatefulSets here.
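A minimal sketch of the same workload as a StatefulSet (the headless Service name scs-db-sink-service is a placeholder; a StatefulSet needs a governing Service):

apiVersion: v1
kind: Service
metadata:
  name: scs-db-sink-service
spec:
  clusterIP: None          # headless Service governing the StatefulSet
  selector:
    app: scs-db-sink
  ports:
  - port: 1068
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: scs-db-sink
spec:
  serviceName: scs-db-sink-service
  replicas: 1
  selector:
    matchLabels:
      app: scs-db-sink
  template:
    metadata:
      labels:
        app: scs-db-sink
    spec:
      nodeSelector:
        cloud.google.com/gke-nodepool: service-pool
      containers:
      - name: scs-db-sink
        image: 'IMAGE_NAME'
        imagePullPolicy: Always
        ports:
        - containerPort: 1068

Scaling this to two replicas gives you pods named scs-db-sink-0 and scs-db-sink-1.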