How do I set the imagePullPolicy with Minikube - kubernetes

I am new to Kubernetes. I am trying to follow this tutorial that instructs me on how to use minikube to set up a local service. I was able to get things running with the $ kubectl run hello-minikube --image=k8s.gcr.io/echoserver:1.10 --port=8080 command from the tutorial. Huzzah!
Now I want to run a server with a locally tagged-and-built Docker image. According to this post, all I need to do is tell my computer to use the minikube docker daemon, build my image, and set the imagePullPolicy to Never.
How and where do I set the imagePullPolicy with minikube? I've googled around, and while there are plenty of results, my "babe in the woods" status with K8s leads to information overload. (i.e. the simpler your answer the better)

You have to edit your Deployment (kubectl run creates a deployment). The spec would look something like this:
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 2
  selector:
    matchLabels:
      run: hello-minikube
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: hello-minikube
    spec:
      containers:
      - image: k8s.gcr.io/echoserver:1.10   # <-- change to the right image
        imagePullPolicy: IfNotPresent       # <-- change to Never
        name: hello-minikube
        ports:
        - containerPort: 8080
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
Edit with:
$ kubectl edit deployment hello-minikube
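For the local-image workflow described in the question, the end-to-end steps would look something like this (my-image:local and my-app are placeholder names):
# Point this shell's docker CLI at minikube's Docker daemon
$ eval $(minikube docker-env)
# Build the image directly inside minikube's daemon
$ docker build -t my-image:local .
# Run it without contacting any registry
$ kubectl run my-app --image=my-image:local --image-pull-policy=Never --port=8080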

Related

kubernetes k8 unable to pull latest image

Hi, I am working in Kubernetes. Below is my k8s deployment manifest.
apiVersion: apps/v1
kind: Deployment
metadata: # Dictionary
  name: webapp
spec: # Dictionary
  replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      # maxUnavailable sets how many pods can be unavailable during the rolling update
      maxUnavailable: 50%
      # maxSurge defines how many pods we can add at a time above the desired count
      maxSurge: 1
  selector:
    matchLabels:
      app: webapp
      instance: app
  template:
    metadata: # Dictionary
      name: webapplication
      labels: # Dictionary
        app: webapp # Key-value pairs
        instance: app
      annotations:
        vault.security.banzaicloud.io/vault-role: al-dev
    spec:
      serviceAccountName: default
      terminationGracePeriodSeconds: 30
      containers: # List
      - name: al-webapp-container
        image: ghcr.io/my-org/al.web:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 3000
        resources:
          requests:
            memory: "1Gi"
            cpu: "900m"
          limits:
            memory: "1Gi"
            cpu: "1000m"
      imagePullSecrets:
      - name: githubpackagesecret
Whenever I deploy this to Kubernetes, it's not picking up the latest image from GitHub Packages. What should I do to pull the latest image and update the currently running pod with it? Can someone help me fix this issue? Any help would be appreciated. Thank you.
If you deploy with the same latest tag every time, the Deployment's pod template does not change, so the update may not be picked up even though the image behind the tag is new.
A pod restart is required so the image is downloaded again; if you still see the old code after that, the problem is in the image build itself (a cached image layer).
What you can do for now is try:
kubectl rollout restart deploy <deployment-name> -n <namespace>
This will restart the pods, which fetches the image again for all of them; then check whether the latest code is running.
Since you have imagePullPolicy: Always set, it should always pull the image. Can you run kubectl describe on the pod while it's starting, so we can see the events?
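A more robust fix is to tag each build uniquely (for example with the commit SHA) instead of reusing latest, so every deploy changes the pod template and forces a rollout. A sketch, assuming a CI variable GIT_COMMIT_SHA holds the commit hash:
# Build and push an immutable tag (image name taken from the manifest above)
$ docker build -t ghcr.io/my-org/al.web:$GIT_COMMIT_SHA .
$ docker push ghcr.io/my-org/al.web:$GIT_COMMIT_SHA
# Point the deployment at the new tag; the changed template triggers a rollout
$ kubectl set image deployment/webapp al-webapp-container=ghcr.io/my-org/al.web:$GIT_COMMIT_SHA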

(Again) GKE Fails to mount volumes to deployment/pods: timeout waiting for the condition

Almost two years later, we are experiencing the same issue as described in this SO post.
Our workloads had been running without any disruption since 2018, and they suddenly stopped because we had to renew certificates. Since then we have not been able to start the workloads again... The failure is caused by the fact that pods try to mount a persistent disk via NFS, and the
nfs-server pod (based on gcr.io/google_containers/volume-nfs:0.8) can't mount the persistent disk.
We have upgraded from 1.23 to 1.25.5-gke.2000 (experimenting with a few intermediate versions along the way) and hence have also switched to containerd.
We have recreated everything multiple times with slight variations, but no luck. Pods definitely cannot access any persistent disk.
We've checked the basics: the persistent disks are in the same zone as the GKE cluster, the service account used by the pods has the necessary permissions to access the disk, etc.
No logs are visible for any pod, which is also strange since logging seems to be correctly configured.
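For reference, checks like those can be run along these lines (a sketch; the zone is a placeholder, the disk and label names come from the manifest below):
# Confirm the disk exists and note its zone
$ gcloud compute disks describe webapp-data-disk --zone us-central1-a
# Look at mount-related events on the failing pod
$ kubectl describe pod -l role=nfs-server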
Here is the nfs-server.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    role: nfs-server
  name: nfs-server
spec:
  replicas: 1
  selector:
    matchLabels:
      role: nfs-server
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      labels:
        role: nfs-server
    spec:
      containers:
      - image: gcr.io/google_containers/volume-nfs:0.8
        imagePullPolicy: IfNotPresent
        name: nfs-server
        ports:
        - containerPort: 2049
          name: nfs
          protocol: TCP
        - containerPort: 20048
          name: mountd
          protocol: TCP
        - containerPort: 111
          name: rpcbind
          protocol: TCP
        resources: {}
        securityContext:
          privileged: true
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /exports
          name: webapp-disk
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - gcePersistentDisk:
          fsType: ext4
          pdName: webapp-data-disk
        name: webapp-disk
status: {}
OK, fixed. I had to enable the CSI driver on our legacy cluster, as described here...
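For reference, enabling the Compute Engine persistent disk CSI driver on an existing cluster looks roughly like this (cluster name and zone are placeholders):
$ gcloud container clusters update my-cluster \
    --zone us-central1-a \
    --update-addons=GcePersistentDiskCsiDriver=ENABLED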

How to find non-portable fields from existing kubernetes resources configuration?

Cluster information:
Kubernetes version: v1.12.8-gke.10 on GCP
Question:
I’m doing an application migration now. What I do is grab the configuration of all related resources and then deploy them to a new cluster. After getting the information from the shell command kubectl get <resource> -o yaml, I noticed that there is a lot of information that my deployment YAMLs don’t have.
I deleted the .spec.clusterIP, .metadata.uid, .metadata.selfLink, .metadata.resourceVersion, .metadata.creationTimestamp, .metadata.generation, .status, .spec.template.spec.securityContext, .spec.template.spec.dnsPolicy, .spec.template.spec.terminationGracePeriodSeconds, and .spec.template.spec.restartPolicy fields.
I’m not sure whether there are other fields I need to delete that would influence the new deployment.
Is there a way to find all the non-portable fields that I can delete?
And another question: do all related resources matter? For now I just grab a list of resources from kubectl api-resources and then get info on them one by one. Should I ignore some resources, like ReplicaSet, when migrating the whole application?
For example, the output configuration of an nginx deployment looks like this:
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: "2019-07-16T21:55:39Z"
  generation: 1
  labels:
    app: nginx
  name: nginx-deployment
  namespace: default
  resourceVersion: "1482081"
  selfLink: /apis/extensions/v1beta1/namespaces/default/deployments/nginx-deployment
  uid: 732377ee-a814-11e9-bbe9-42010a8a001a
spec:
  progressDeadlineSeconds: 600
  replicas: 2
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: nginx
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:1.7.9
        imagePullPolicy: IfNotPresent
        name: nginx
        ports:
        - containerPort: 80
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 2
  conditions:
  - lastTransitionTime: "2019-07-16T21:55:41Z"
    lastUpdateTime: "2019-07-16T21:55:41Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: "2019-07-16T21:55:39Z"
    lastUpdateTime: "2019-07-16T21:55:41Z"
    message: ReplicaSet "nginx-deployment-5c689d88bb" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  observedGeneration: 1
  readyReplicas: 2
  replicas: 2
  updatedReplicas: 2
Right off the bat: there is no way to automatically detect which fields are cluster-specific; kubectl get <resource> -o yaml outputs the current RESTful state of the resource. However, you can use some Linux shell tooling to strip those fields from a cluster dump. Take a look at this blog post on Medium.
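For example, a sketch of stripping the server-populated fields with yq (v4 syntax assumed; extend the del() chain with any other fields you want removed):
$ kubectl get deployment nginx-deployment -o yaml \
    | yq eval 'del(.status)
        | del(.metadata.uid)
        | del(.metadata.selfLink)
        | del(.metadata.resourceVersion)
        | del(.metadata.creationTimestamp)
        | del(.metadata.generation)' -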
As to the "do all resources matter" question, the answer is no. If you have a Deployment, you don't need the ReplicaSet or Pod resources, since the Deployment will create and manage those once it is deployed. You just need the top-level controller resource (the same goes for DaemonSets and StatefulSets).
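In practice that means exporting only the controller-level and configuration resources, along these lines (the resource list and namespace are illustrative):
$ kubectl get deployments,daemonsets,statefulsets,services,configmaps,secrets \
    -n default -o yaml > app-export.yaml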
On another note, the fields in the spec section can mostly all be kept; the values you are removing there are likely defaults you never set explicitly, and there is no real benefit in removing them.

Mounting kubernetes volume in multiple containers within a pod with gitlab

I am setting up a CI/CD environment for the first time, consisting of a single-node Kubernetes cluster (minikube).
On this node I created a PV
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
data-volume 1Gi RWO Retain Bound gitlab-managed-apps/data-volume-claim manual 20m
and PVC
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
data-volume-claim Bound data-volume 1Gi RWO manual 19m
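For reference, a PV/PVC pair matching that listing could be declared as follows (a sketch; the hostPath backing is an assumption that fits a single-node minikube):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: data-volume
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: manual
  hostPath:
    path: /data/data-volume   # assumed host directory
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-volume-claim
  namespace: gitlab-managed-apps
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi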
Now I would like to create a pod with multiple containers accessing to this volume.
Where and how would you advise setting this up using GitLab pipelines (gitlab-ci, etc.)? Multiple repos may be the best fit for the project.
Here is a fully working example of a deployment manifest with two containers defined in the Pod spec (based on different nginx Docker images) that use the same PV, from which they serve custom static HTML content on ports 80 and 81 respectively:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: null
  generation: 1
  labels:
    run: nginx
  name: nginx
  selfLink: /apis/extensions/v1beta1/namespaces/default/deployments/nginx
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      run: nginx
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: nginx
    spec:
      volumes:
      - name: my-pv-storage
        persistentVolumeClaim:
          claimName: my-pv-claim-nginx
      containers:
      - image: nginx
        imagePullPolicy: IfNotPresent
        name: nginx
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: my-pv-storage
          subPath: html_custom
      - image: custom-nginx
        imagePullPolicy: IfNotPresent
        name: custom-nginx
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: my-pv-storage
          subPath: html
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status: {}
Yes, you can probably do that: run multiple containers in one pod sharing the one PVC.
In CI/CD with multiple repos, a commit to one repo builds a new Docker image, pushes it to the registry, and deploys it to the k8s cluster.
If you plan to tag images with latest, the multi-container pod will be easy to manage: a commit in only one repository still results in a single deployment update.
If you plan to use SHA-hash tags in CI/CD, though, think about how you will manage a deployment file that contains two container configs.
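To make the tagging discussion concrete, here is a minimal .gitlab-ci.yml sketch that builds with the commit SHA and patches just one container of the shared deployment (CI_REGISTRY_IMAGE and CI_COMMIT_SHORT_SHA are standard GitLab CI variables; the deployment and container names come from the manifest above):
stages:
  - build
  - deploy

build:
  stage: build
  script:
    # Build and push an image tagged with the short commit SHA
    - docker build -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA .
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA

deploy:
  stage: deploy
  script:
    # Update only the container this repo owns; the sibling container is untouched
    - kubectl set image deployment/nginx custom-nginx=$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA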

K8s Create Deployment with EnvFrom

I am trying to fire up an influxdb instance on my cluster.
I am following a few different guides and am trying to get it to expose a secret as environment variables using envFrom. Unfortunately, I always get Environment: <none> after doing my deployment, and echoing the environment variables I expect yields blank values as well.
I am running this command to deploy (the script below is in influxdb.yaml): kubectl create deployment influxdb --image=influxdb
Here is my deployment script:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  creationTimestamp: null
  generation: 1
  labels:
    app: influxdb
    project: pihole
  name: influxdb
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: influxdb
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: influxdb
    spec:
      containers:
      - name: influxdb
        envFrom:
        - secretRef:
            name: influxdb-creds
        image: docker.io/influxdb:1.7.6
        imagePullPolicy: IfNotPresent
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /var/lib/influxdb
          name: var-lib-influxdb
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - name: var-lib-influxdb
        persistentVolumeClaim:
          claimName: influxdb
status: {}
The output of kubectl describe secret influxdb-creds is this:
Name:         influxdb-creds
Namespace:    default
Labels:       <none>
Annotations:  <none>

Type:  Opaque

Data
====
INFLUXDB_USERNAME:  4 bytes
INFLUXDB_DATABASE:  6 bytes
INFLUXDB_HOST:      8 bytes
INFLUXDB_PASSWORD:  11 bytes
To test your deployment, please first create the secret and then create the deployment:
1. Secret:
kubectl create secret generic influxdb-creds --from-literal=INFLUXDB_USERNAME='test_user' --from-literal=INFLUXDB_PASSWORD='test_password'
2. Deployment:
kubectl apply -f <path_to_your_yaml_file>
To verify, please run:
kubectl describe secret influxdb-creds
kubectl exec <your_new_deployed_pod> -- env
kubectl describe pod <your_new_deployed_pod>
and take a look at:
Environment Variables from:
  influxdb-creds  Secret  Optional: false
Hope this helps. Please share your findings.
The answer to this is that I was creating the deployment incorrectly. I was using the command kubectl create deployment influxdb --image=influxdb, which creates a blank deployment; instead I should have created it with kubectl create -f influxdb.yaml, where influxdb.yaml was the file containing the deployment definition in the original question.
I was making the false assumption that the create deployment command reads the YAML file of the same name, but it does not.
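A quick way to confirm the env wiring after applying the manifest (assuming a reasonably recent kubectl, which accepts a deployment target for exec):
$ kubectl create -f influxdb.yaml
$ kubectl exec deploy/influxdb -- env | grep INFLUXDB_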