How to pull new images on Kubernetes on every deployment - kubernetes

Hi, I am trying to deploy to my GKE cluster through Cloud Build, and the deployment itself works. But every time I push a new image, my cluster does not pick it up; it deploys the Pod with the old image only (nothing is changed). When I delete my Pod and trigger Cloud Build again, it does pick up the new image. I have also added imagePullPolicy: Always.
Below is my cloudbuild.yaml file.
- id: 'build your instance'
  name: 'maven:3.6.0-jdk-8-slim'
  entrypoint: mvn
  args: ['clean', 'package', '-Dmaven.test.skip=true']
- id: "docker build"
  name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/PID/test', '.']
- id: "docker push"
  name: 'gcr.io/cloud-builders/docker'
  args: ['push', 'gcr.io/PID/test']
- id: 'Deploy image to kubernetes'
  name: 'gcr.io/cloud-builders/gke-deploy'
  args:
  - run
  - --filename=./run/helloworld/src
  - --location=us-central1-c
  - --cluster=cluster-2
My pod manifest looks like this.
apiVersion: v1
kind: Pod
metadata:
  name: Test
  labels:
    app: hello
spec:
  containers:
  - name: private-reg-containers
    image: gcr.io/PID/test
    imagePullPolicy: "Always"
Any help is appreciated.

This is expected behavior, and you may be misunderstanding what imagePullPolicy: "Always" does. This is well explained in this answer:
Kubernetes is not watching for a new version of the image. The image pull policy specifies how to acquire the image to run the container. Always means it will try to pull a new version each time it's starting a container. To see the update you'd need to delete the Pod (not the Deployment) - the newly created Pod will run the new image.
There is no direct way to have Kubernetes automatically update running containers with new images. This would be part of a continuous delivery system (perhaps using kubectl set image with the new sha256sum or an image tag - but not latest).
This is why the Pods only get the newest image when you recreate them. So the answer to your question is to explicitly tell Kubernetes to roll out the new image. In the example I share with you, I use two tags: the classic latest, which is mainly a friendly name for sharing the image, and a tag based on $BUILD_ID, which is what actually updates the image in GKE. In this example I update the image of a Deployment; adapting it to update a standalone Pod is left as your little "homework" (see the sketch after the example).
steps:
  # Building Image
  - name: 'gcr.io/cloud-builders/docker'
    id: build-loona
    args:
      - build
      - --tag=${_LOONA}:$BUILD_ID
      - --tag=${_LOONA}:latest
      - .
    dir: 'loona/'
    waitFor: ['-']
  # Pushing image (this pushes the image with both tags)
  - name: 'gcr.io/cloud-builders/docker'
    id: push-loona
    args:
      - push
      - ${_LOONA}
    waitFor:
      - build-loona
  # Deploying to GKE
  - name: "gcr.io/cloud-builders/gke-deploy"
    id: deploy-gke
    args:
      - run
      - --filename=k8s/
      - --location=${_COMPUTE_ZONE}
      - --cluster=${_CLUSTER_NAME}
  # Update Image
  - name: 'gcr.io/cloud-builders/kubectl'
    id: update-loona
    args:
      - set
      - image
      - deployment/loona-deployment
      - loona=${_LOONA}:$BUILD_ID
    env:
      - 'CLOUDSDK_COMPUTE_ZONE=${_COMPUTE_ZONE}'
      - 'CLOUDSDK_CONTAINER_CLUSTER=${_CLUSTER_NAME}'
    waitFor:
      - deploy-gke
substitutions:
  _CLUSTER_NAME: my-cluster
  _COMPUTE_ZONE: us-central1
  _LOONA: gcr.io/${PROJECT_ID}/loona
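For the "homework" part, here is a hedged sketch of the equivalent final step for a standalone Pod, reusing the Pod and container names from the question's manifest (test and private-reg-containers, with the Pod name lower-cased since Pod names must be lowercase) and assuming the build step also tags the image with $BUILD_ID:

  # Update the image of a standalone Pod (sketch; names taken from the question's manifest)
  - name: 'gcr.io/cloud-builders/kubectl'
    id: update-pod
    args:
      - set
      - image
      - pod/test
      - private-reg-containers=gcr.io/PID/test:$BUILD_ID
    env:
      - 'CLOUDSDK_COMPUTE_ZONE=${_COMPUTE_ZONE}'
      - 'CLOUDSDK_CONTAINER_CLUSTER=${_CLUSTER_NAME}'
    waitFor:
      - deploy-gke

A container's image is one of the few fields of a running Pod that can be updated in place, so the kubelet restarts that container with the new image; unlike a Deployment, though, a bare Pod gets no rolling update.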

Related

Kubernetes Podspec to download only container image but it should not install

I want to download the container image, but I don't want to deploy/install it.
How can I write a Pod spec that only downloads images without creating a container?
Is there any Pod spec snippet for this?
As far as I know, there is no Kubernetes resource whose sole purpose is to download an image of your choosing. To have your application images present on your Nodes, you can consider the following solutions/workarounds:
Use a DaemonSet with initContainers
Use tools like Ansible to pull the images with a playbook
Use a DaemonSet with initContainers
Assume the following situation:
You've created 2 images that you want to have on all of the Nodes.
You can use a DaemonSet (which spawns a Pod on each Node) whose initContainers use those images; it will run on all Nodes and ensure that the images are present on each machine.
An example of such setup could be following:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: pull-images
  labels:
    k8s-app: pull-images
spec:
  # AS THIS DAEMONSET IS NOT SUPPOSED TO SERVE TRAFFIC, CONSIDER THIS UPDATE STRATEGY TO SPEED UP THE DOWNLOAD PROCESS
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 100
  selector:
    matchLabels:
      name: pull-images
  template:
    metadata:
      labels:
        name: pull-images
    spec:
      initContainers:
      # PUT HERE THE IMAGES THAT YOU WANT TO PULL AND OVERRIDE THEIR ENTRYPOINT
      - name: ubuntu
        image: ubuntu:20.04 # <-- IMAGE #1
        imagePullPolicy: Always # SPECIFY THE POLICY FOR THIS IMAGE
        command: ["/bin/sh", "-c", "exit 0"]
      - name: nginx
        image: nginx:1.19.10 # <-- IMAGE #2
        imagePullPolicy: IfNotPresent # SPECIFY THE POLICY FOR THIS IMAGE
        command: ["/bin/sh", "-c", "exit 0"]
      containers:
      # MAIN CONTAINER: AS SMALL AS POSSIBLE AN IMAGE, JUST SLEEPING
      - name: alpine
        image: alpine
        command: [sleep]
        args:
        - "infinity"
The Kubernetes DaemonSet controller will ensure that a Pod runs on each Node. The initContainers act as placeholders for the images: the images that you want on the Nodes are pulled, and their ENTRYPOINT is overridden so that they don't keep running. After that, the main container (alpine) runs with a sleep infinity command.
This setup will also work when new Nodes are added.
Following on that topic, I would also consider checking the following documentation on imagePullPolicy:
Kubernetes.io: Docs: Concepts: Containers: Images: Updating images
A side note!
I've set the imagePullPolicy differently for the two initContainers to show that you can specify it independently for each container. Please use the policy that best suits your use case.
Use tools like Ansible to pull the images with a playbook
Assuming that you have SSH access to the Nodes, you can consider using Ansible with its community module (and assuming that your container runtime is Docker):
community.docker.docker_image
Citing the documentation for this module:
This plugin is part of the community.docker collection (version 1.3.0).
To install it use: ansible-galaxy collection install community.docker.
Synopsis
Build, load or pull an image, making the image available for creating containers. Also supports tagging an image into a repository and archiving an image to a .tar file.
-- Docs.ansible.com: Ansible: Collections: Community: Docker: Docker image module
You can use it with the following example:
hosts.yaml
all:
  hosts:
    node-1:
      ansible_port: 22
      ansible_host: X.Y.Z.Q
    node-2:
      ansible_port: 22
      ansible_host: A.B.C.D
playbook.yaml
- name: Playbook to download images
  hosts: all
  user: ENTER_USER
  tasks:
    - name: Pull an image
      community.docker.docker_image:
        name: "{{ item }}"
        source: pull
      with_items:
        - "nginx"
        - "ubuntu"
A side note!
For the Ansible approach, I needed to install the docker Python package:
$ pip3 install docker
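You would then run the playbook against the inventory with something like this (using the file names from the example above):

$ ansible-playbook -i hosts.yaml playbook.yaml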
Additional resources:
Kubernetes.io: Docs: Concepts: Workloads: Controllers: Daemonset
Kubernetes.io: Docs: Concepts: Workloads: Pods: initContainers

Copy file inside Kubernetes pod from another container

I need to copy a file into my Pod at creation time. I don't want to use ConfigMaps or Secrets. I am trying to create a volume mount and copy the source file using the kubectl cp command. My manifest looks like this:
apiVersion: v1
kind: Pod
metadata:
  name: copy
  labels:
    app: hello
spec:
  containers:
  - name: init-myservice
    image: bitnami/kubectl
    command: ['kubectl', 'cp', './test.json', 'init-myservice:./data']
    volumeMounts:
    - name: my-storage
      mountPath: data
  - name: init-myservices
    image: nginx
    volumeMounts:
    - name: my-storage
      mountPath: data
  volumes:
  - name: my-storage
    emptyDir: {}
But I am getting a CrashLoopBackOff error. Any help or suggestion is highly appreciated.
It's not possible.
Let me explain: think of it as two different machines. Normally, your local machine is the one where the file exists, and kubectl cp copies it to another machine. Here, though, you are trying to copy a file from your machine to the Pod's machine from inside the Pod itself, which won't work.
What you can do instead is build your own Docker image for the init container and copy the file you want to store into it before building the image. Then that container can copy the file into the shared volume where you want it.
I agree with the answer provided by H.R. Emon; it explains why you can't just run kubectl cp inside of the container. I also think there are some resources worth adding to show how you can tackle this particular setup.
For this particular use case it is recommended to use an initContainer.
initContainers - specialized containers that run before app containers in a Pod. Init containers can contain utilities or setup scripts not present in an app image.
Kubernetes.io: Docs: Concepts: Workloads: Pods: Init-containers
You could use the example from the official Kubernetes documentation (assuming that downloading your test.json is feasible):
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
    volumeMounts:
    - name: workdir
      mountPath: /usr/share/nginx/html
  # These containers are run during pod initialization
  initContainers:
  - name: install
    image: busybox
    command:
    - wget
    - "-O"
    - "/work-dir/index.html"
    - http://info.cern.ch
    volumeMounts:
    - name: workdir
      mountPath: "/work-dir"
  dnsPolicy: Default
  volumes:
  - name: workdir
    emptyDir: {}
-- Kubernetes.io: Docs: Tasks: Configure Pod Initialization: Create a Pod that has an init container
You can also modify the above example to your specific needs.
Also, referring to your particular example, there are some things that you will need to be aware of:
To use kubectl inside of a Pod, you will need the required permissions to access the Kubernetes API. You can grant them by using a ServiceAccount with appropriate RBAC rules (a minimal sketch follows the links below). More can be found in these links:
Kubernetes.io: Docs: Reference: Access authn authz: Authentication: Service account tokens
Kubernetes.io: Docs: Reference: Access authn authz: RBAC
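For illustration only, a minimal sketch of what such a ServiceAccount and Role could look like (all names are hypothetical, the default namespace is assumed, and the Role grants pods/exec because kubectl cp works through the exec subresource):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: copy-sa
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: copy-role
  namespace: default
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list"]
- apiGroups: [""]
  resources: ["pods/exec"]
  verbs: ["create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: copy-rolebinding
  namespace: default
subjects:
- kind: ServiceAccount
  name: copy-sa
  namespace: default
roleRef:
  kind: Role
  name: copy-role
  apiGroup: rbac.authorization.k8s.io

The Pod would then reference it with serviceAccountName: copy-sa in its spec.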
Your bitnami/kubectl container runs into CrashLoopBackOff errors because you're passing it a single command that runs to completion. After that, the container reports status Completed and is restarted, resulting in the aforementioned CrashLoopBackOff. To avoid that you would need to use an initContainer.
You can read more about what is happening in your setup by following this answer (connected with the previous point):
Stackoverflow.com: Questions: What happens one of the container process crashes in multiple container POD?
Additional resources:
Kubernetes.io: Pod lifecycle
A side note!
I also consider it important to include the reason why Secrets and ConfigMaps cannot be used in this particular setup.
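Putting these points together, a hedged sketch of the original manifest reworked with an initContainer; the helper image name (your-registry/file-provider) is hypothetical and stands for an image that was built with test.json already inside it, as the other answer suggests:

apiVersion: v1
kind: Pod
metadata:
  name: copy
  labels:
    app: hello
spec:
  initContainers:
  - name: copy-file
    image: your-registry/file-provider   # hypothetical image built with /test.json baked in
    command: ['sh', '-c', 'cp /test.json /data/test.json']
    volumeMounts:
    - name: my-storage
      mountPath: /data
  containers:
  - name: web
    image: nginx
    volumeMounts:
    - name: my-storage
      mountPath: /data
  volumes:
  - name: my-storage
    emptyDir: {}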

Volume shared between two containers "is busy or locked"

I have a deployment that runs two containers. One of the containers attempts to build (during deployment) a javascript bundle that the other container, nginx, tries to serve.
I want to use a shared volume to place the javascript bundle after it's built.
So far, I have the following deployment file (with irrelevant pieces removed):
apiVersion: apps/v1
kind: Deployment
metadata:
  ...
spec:
  ...
  template:
    ...
    spec:
      hostNetwork: true
      containers:
      - name: personal-site
        image: wheresmycookie/personal-site:3.1
        volumeMounts:
        - name: build-volume
          mountPath: /var/app/dist
      - name: nginx-server
        image: nginx:1.19.0
        volumeMounts:
        - name: build-volume
          mountPath: /var/app/dist
      volumes:
      - name: build-volume
        emptyDir: {}
To the best of my ability, I have followed these guides:
https://kubernetes.io/docs/concepts/storage/volumes/#emptydir
https://kubernetes.io/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume/
One other thing to point out is that I'm trying to run this locally, at the moment using minikube.
EDIT: The Dockerfile I used to build this image is:
FROM node:alpine
WORKDIR /var/app
COPY . .
RUN npm install
RUN npm install -g @vue/cli@latest
CMD ["npm", "run", "build"]
I realize that I do not need to build this when I actually run the image, but my next goal is to insert Pod instance information as environment variables, and with JavaScript I unfortunately can only build once that information is available to me.
Problem
The logs from the personal-site container reveal:
- Building for production...
ERROR Error: EBUSY: resource busy or locked, rmdir '/var/app/dist'
Error: EBUSY: resource busy or locked, rmdir '/var/app/dist'
I'm not sure why the build is trying to remove /dist, but I also have a feeling that this is irrelevant. I could be wrong.
I thought that maybe this could be related to the lifecycle of containers/volumes, but the docs suggest that "An emptyDir volume is first created when a Pod is assigned to a Node, and exists as long as that Pod is running on that node".
Question
What are some reasons that a volume might not be available to me after the containers are already running? Given that you probably have much more experience than I do with Kubernetes, what would you look into next?
The best way is to customize your image's entrypoint as follows:
Once you finish building the /var/app/dist folder, copy (or move) it to another, empty path (e.g. /opt/dist):
cp -r /var/app/dist/* /opt/dist
PAY ATTENTION: this step must be done in the ENTRYPOINT script, not in a RUN layer (a sketch of such a script follows).
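For illustration, a hedged sketch of what such an entrypoint script could look like for this image (the file name docker-entrypoint.sh and the final sleep are assumptions, not taken from the original image):

#!/bin/sh
# docker-entrypoint.sh (sketch)
set -e

# Build the bundle now that runtime information (e.g. env vars) is available
npm run build

# Copy the freshly built bundle into the path that is shared with nginx
cp -r /var/app/dist/* /opt/dist/

# Keep the container alive; replace with the image's real long-running command if it has one
exec sleep infinity

The image's Dockerfile would then copy this script in and point ENTRYPOINT at it, replacing the current CMD ["npm", "run", "build"].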
Now use /opt/dist instead:
apiVersion: apps/v1
kind: Deployment
metadata:
  ...
spec:
  ...
  template:
    ...
    spec:
      hostNetwork: true
      containers:
      - name: personal-site
        image: wheresmycookie/personal-site:3.1
        volumeMounts:
        - name: build-volume
          mountPath: /opt/dist # <--- make it consistent with the image's entrypoint logic
      - name: nginx-server
        image: nginx:1.19.0
        volumeMounts:
        - name: build-volume
          mountPath: /var/app/dist
      volumes:
      - name: build-volume
        emptyDir: {}
Good luck!
If it's not clear how to customize the entrypoint, share the image's current entrypoint with us and we will implement it.

How to run a binary using a Kubernetes ConfigMap

I have used ConfigMaps with files, but I am experimenting with portable services like supervisord and other internal tools.
We have a Go binary that can be run in any image. What I am trying to do is run this binary using a ConfigMap.
Example:
We have an internal tool written in Go (less than 7 MB in size) that could be stored in a ConfigMap; we want to mount that ConfigMap inside a Kubernetes Pod and run the binary inside the Pod.
Question: does anyone do this? Is it a good approach? What is the best practice?
I don't believe you can put 7 MB of content in a ConfigMap (they are limited to roughly 1 MiB). See here for example. What you're trying to do sounds like a very unusual practice. The standard practice for running binaries in Pods in Kubernetes is to build a container image that includes the binary and configure the image or the Pod to run that binary.
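As an illustration of that standard practice, a minimal sketch of such an image (the binary name mytool and the base image are assumptions, not from the question):

# Dockerfile (sketch): bake the tool into an image instead of a ConfigMap
FROM alpine:3.14
COPY mytool /usr/local/bin/mytool
RUN chmod +x /usr/local/bin/mytool
ENTRYPOINT ["/usr/local/bin/mytool"]

A statically linked Go binary typically runs fine on a minimal base image like this; adjust the base image if your binary needs cgo or other system libraries.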
I too faced a similar issue while storing an elastic.jks keystore binary file in a k8s Pod.
AFAIK there are two options:
Make use of a ConfigMap to store binary data (see the sketch at the end of this answer). Check this out.
OR
Store your binary file remotely, for example in an S3 bucket, and pull it before the actual Pod runs by using the initContainers concept:
apiVersion: v1
kind: Pod
metadata:
  name: alpine
  namespace: default
spec:
  containers:
  - name: myapp-container
    image: alpine:3.1
    command: ['sh', '-c', 'if [ -f /jks/elastic.jks ]; then sleep 99999; fi']
    volumeMounts:
    - name: jksdata
      mountPath: /jks
  initContainers:
  - name: init-container
    image: atlassian/pipelines-awscli
    command: ["/bin/sh", "-c"]
    args: ['aws s3 sync s3://my-artifacts/$CLUSTER /jks/']
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - name: jksdata
      mountPath: /jks
    env:
    - name: CLUSTER
      value: dev-elastic
  volumes:
  - name: jksdata
    emptyDir: {}
  restartPolicy: Always
As @amit-kumar-gupta mentioned, there is a ConfigMap size constraint.
I recommend the second way.
Hope this helps.
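For completeness, a hedged sketch of the first option for a binary that actually fits under the roughly 1 MiB ConfigMap limit (all names are hypothetical). The ConfigMap would be created first with kubectl create configmap mytool --from-file=mytool=./mytool, which stores the file as binaryData; the Pod then mounts it with an executable file mode:

apiVersion: v1
kind: Pod
metadata:
  name: mytool-pod
spec:
  containers:
  - name: runner
    image: alpine:3.14
    command: ['/tools/mytool']
    volumeMounts:
    - name: tool
      mountPath: /tools
  volumes:
  - name: tool
    configMap:
      name: mytool
      defaultMode: 0755   # make the mounted file executable

Given the size constraint discussed above, this only works for small binaries, which is why the second way is recommended here.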

run kubernetes job in cloud builder

I want to create and remove a Job using Google Cloud Build. Here's my configuration, which builds my Docker image and pushes it to GCR.
# cloudbuild.yaml
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/xyz/abc:latest', '-f', 'Dockerfile.ng-unit', '.']
- name: 'gcr.io/cloud-builders/docker'
  args: ['push', 'gcr.io/xyz/abc:latest']
Now I want to create a Job. I want to run something like
kubectl create -R -f ./kubernetes
which creates the Jobs defined in the kubernetes folder.
I know Cloud Build has the gcr.io/cloud-builders/kubectl builder, but I can't figure out how to use it. Also, how can I authenticate it to run kubectl commands? How can I use service_key.json?
At first I wasn't able to connect and get cluster credentials. Here's what I did:
Go to IAM and add another role to xyz@cloudbuild.gserviceaccount.com. I used Project Editor.
Wrote this in cloudbuild.yaml:
- name: 'gcr.io/cloud-builders/kubectl'
  args: ['create', '-R', '-f', './dockertests/unit-tests/kubernetes']
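Note that the gcr.io/cloud-builders/kubectl builder also needs to know which cluster to fetch credentials for; this is normally passed through environment variables on the step. A hedged sketch, with placeholder zone and cluster values:

- name: 'gcr.io/cloud-builders/kubectl'
  args: ['create', '-R', '-f', './dockertests/unit-tests/kubernetes']
  env:
  - 'CLOUDSDK_COMPUTE_ZONE=us-central1-a'
  - 'CLOUDSDK_CONTAINER_CLUSTER=my-cluster'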