Helm on Minikube: update local image

I have Minikube (v1.1.0) running locally with Helm (v2.13.1) initialized, and I pointed my local Docker CLI at Minikube's Docker daemon by running eval $(minikube docker-env). In the code base of my application I created a chart with helm create chart. I changed the first few lines of ./chart/values.yaml to:
image:
  repository: app-development
  tag: latest
  pullPolicy: Never
I build the image locally and install/upgrade the chart with Helm:
docker build . -t app-development
helm upgrade --install example ./chart
Now, this works perfectly the first time, but if I make changes to the application I would like to be able to just run the two commands above again to update the image. Is there any way to get this working?
Workaround
To get the expected behaviour I can delete the chart from Minikube and install it again:
docker build . -t app-development
helm del --purge example
helm install example ./chart

When you make a change like this, Kubernetes looks for some change in the Deployment object. If it sees that you want 1 Pod running app-development:latest, and it already has 1 Pod running an image named app-development:latest, then it is already in the desired state and it doesn't need to do anything (even if the local image behind that tag has changed).
The canonical advice here is to never use the :latest tag with Kubernetes. Every time you build an image, use a distinct tag (a timestamp or the current source-control commit ID are easy unique choices). With Helm it's easy enough to inject this based on a value you pass in:
image: app-development:{{ .Values.tag | default "latest" }}
This sort of build sequence would look a little more like:
TAG=$(date +%Y%m%d-%H%M%S)
docker build -t "app-development:$TAG" .
helm upgrade --install example ./chart --set "tag=$TAG"
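For reference, a minimal sketch of how the container section of the chart's Deployment template could consume that value. It combines the .Values.tag convention from this answer with the pullPolicy key from the question's values.yaml; the surrounding structure is illustrative, not the chart's actual template:
spec:
  template:
    spec:
      containers:
        - name: app
          # .Values.tag comes from --set tag=$TAG; falls back to "latest" if unset
          image: app-development:{{ .Values.tag | default "latest" }}
          # Never pull, so the image built into Minikube's Docker daemon is used
          imagePullPolicy: {{ .Values.image.pullPolicy }}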
If you're actively developing your component you may find it easier to try to separate out "hacking on code" from "deploying into Kubernetes" as much as you can. Some amount of this tends to be inevitable, but Kubernetes really isn't designed to be a live development environment.

One way you could solve this problem is by using Minikube and Cloud Code from Google. When you initialize Cloud Code in your project, it creates a skaffold.yaml at the root of the project. You can put the Helm chart for the same project in the same code base. Go ahead and edit this configuration to match the folder location of the Helm chart:
deploy:
  helm:
    releases:
      - name: <chart_name>
        chartPath: <folder path relative to this file>
Now when you click on Cloud Code at the bottom of your Visual Studio Code editor (or any supported editor), it should give you a list of run options (screenshot: https://i.stack.imgur.com/vXK4U.png). Select "Run on Kubernetes" from the list.
The only change you'll have to make in your Helm chart is to read the image URL from the Skaffold YAML using a profile:
profiles:
  - name: prod
    deploy:
      helm:
        releases:
          - name: <helm_chart_name>
            chartPath: helm
            skipBuildDependencies: true
            artifactOverrides:
              image: <url_production_image_url>
This will read the image from the configured URL, whereas locally it will read from the Docker daemon. Cloud Code also provides hot update / redeployment when you change any file, so there is no need to keep specifying an image tag while testing locally. Once you're happy with the code, update the image with the new version number, which should trigger a deployment in your integration / dev environment.
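For orientation, a local-development skaffold.yaml of the kind Cloud Code generates might look roughly like this; the apiVersion, image name and chart path are placeholders and depend on your Skaffold release and project layout:
apiVersion: skaffold/v2beta26
kind: Config
build:
  # Build against the local Docker daemon; no push needed for a local cluster
  local:
    push: false
  artifacts:
    - image: app-development
deploy:
  helm:
    releases:
      - name: example
        chartPath: chart
        artifactOverrides:
          # Inject the image Skaffold just built into the chart's "image" value
          image: app-development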

Related

Helm rollback to previous build is not reflecting in deployments [duplicate]

I built a simple NodeJS API, pushed the Docker image to a repo and deployed it to my k8s cluster with helm install (works perfectly fine).
The pullPolicy is Always.
Now I want to update the source code and deploy the updated version of my app. I bumped the version in all files, built and pushed the new Docker image and tried helm upgrade, but it seems like nothing happened.
With helm list I can see that a new revision was deployed, but the changes to the source code were not.
watch kubectl get pods also shows that no new pods were created, the way you would expect with kubectl apply ...
What did I do wrong?
Helm will roll out changes to Kubernetes objects only if there are changes to roll out. If you use :latest, there is no change to apply to the Deployment manifest, ergo no rolling update of the Pods happens. To keep using latest, you need to add something (i.e. a label with a SHA / version) that will change and cause the Deployment to get updated by Helm. Also keep in mind that you will usually need imagePullPolicy: Always as well.
Possible workaround:
spec:
  template:
    metadata:
      labels:
        date: "{{ now | unixEpoch }}"
Add it to your Deployment or StatefulSet YAML.
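With that label in the pod template, every helm upgrade renders a different value, which changes the Deployment spec and triggers a rolling update. A usage sketch (release and Deployment names are placeholders):
helm upgrade --install example ./chart
kubectl rollout status deployment/example   # watch the replacement pods come up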
It's worth noting that there's nothing special about the latest tag. In other words it doesn't mean what we would normally think, i.e. "the most recent version".
It's just a string of characters from the container's standpoint. It could be anything, like "blahblah".
The runtime (Docker or Kubernetes) will just check whether it already has an image with that tag and only pull a new image if an image with that tag doesn't exist locally.
Given that "latest" doesn't actually mean anything, the best practice if you want to update images constantly is to use the actual version of the code itself as the image tag, and then, when deploying, have your infrastructure deploy the newest version using the correct tag.
The way I solved this was in the deployment script in .gitlab-ci.yml; you can do something similar in any of your deployment scripts.
export SAME_SHA=$(helm get values service-name | grep SHA | wc -l)
if [ "$SAME_SHA" -eq 1 ]; then helm uninstall service-name; fi
helm upgrade --install service-name -f service-values.yml .
This may not be the best approach for production, as you may end up uninstalling a live service, but for me the production SHAs are never the same, so this works.

How can I restart only one service by using skaffold?

I use Skaffold for a k8s-based microservices app. I run skaffold dev or skaffold run to start everything, and skaffold delete to tear it all down so I can restart all microservices.
If I need to restart only one service, what must I do?
According to the docs:
Use skaffold dev to build and deploy your app every time your code changes,
Use skaffold run to build and deploy your app once, similar to a CI/CD pipeline
1. Deploy your services:
skaffold run --filename=skaffold_test_1.yaml
(in addition you can have multiple workflow configurations).
2. Change your skaffold workflow configuration and run:
skaffold delete --filename=skaffold_test2.yaml
Using this approach, your deployments will not be removed after stopping Skaffold, as they are with the skaffold dev command.
Basically, managing the content of the Skaffold workflow configuration (by adding or removing entries) allows you to deploy or remove a particular service:
apiVersion: skaffold/v1
kind: Config
# ...
deploy:
  kubectl:
    manifests:
      - k8s-service1.yaml
      # - k8s-service2.yaml
You can use skaffold dev's --watch-image flag to restrict the artifacts to monitor. This takes a comma-separated list of images, specified by the artifact.image.
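For example, to have skaffold dev watch and rebuild only one artifact (the image name is a placeholder):
skaffold dev --watch-image=gcr.io/my-project/service1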

How Do I Get Skaffold And Helm Charts To Work With A Local Image Repository?

We're trying to set up a local development environment for an app with several microservices under Skaffold. We managed to do it with base Skaffold, using a (slightly outdated) tutorial at https://github.com/ahmetb/skaffold-from-laptop-to-cloud. And to get Skaffold to push images to a local repository without Helm, all I had to do was set the imageName to something like localhost:5000/image_name.
But with Helm, well... I set up a very crude Helm install (DISCLAIMER: I am not very familiar with Helm yet), just changing the Skaffold YAML to use Helm and dumping all the .yaml deployment and service files into the Helm chart's /templates directory, and that bombed.
Skaffold successfully creates any pods that rely on a stock external image (like redis), but whenever anything uses an image that would be built from a local Dockerfile, it gets stuck and throws this error:
Failed to pull image "localhost:5000/k8s-skaffold/php-test": rpc error: code = Unknown desc = Error response from daemon: Get http://localhost:5000/v2/: dial tcp [::1]:5000: connect: connection refused
As far as I can tell, that's the error you get when no local Docker image repository has been started. But with the non-Helm version, we don't need to start a local image repository; Skaffold just makes that magic happen, which is part of the appeal of Skaffold.
So how do we automagically get Skaffold to create Helm charts that create and pull from a local repository? (As noted, this may be my unfamiliarity with Helm. If so, I apologize.)
The Skaffold YAML is this:
apiVersion: skaffold/v1beta7
kind: Config
build:
  tagPolicy:
    sha256: {}
  artifacts:
    - image: localhost:5000/k8s-skaffold/php-test
      context: voting-app/php-test
deploy:
  helm:
    releases:
      - name: php-help-test
        chartPath: helm
        #wait: true
        #valuesFiles:
        #- helm-skaffold-values.yaml
        values:
          image: localhost:5000/k8s-skaffold/php-test
        #recreatePods will pass --recreate-pods to helm upgrade
        #recreatePods: true
        #overrides builds an override values.yaml file to run with the helm deploy
        #overrides:
        #  some:
        #    key: someValue
        #setValues get appended to the helm deploy with --set.
        #setValues:
        #  some.key: someValue
And the Helm Chart values.yaml is the default provided by a generated chart. I can also provide the Dockerfile if needed, but it's just pulling from that image.
You can't use localhost in your image definition: inside the cluster it resolves to the node or pod itself, not your machine. For the sake of testing you can use the IP of the host where your private registry is running; say the host has address 222.0.0.2, then use image: 222.0.0.2:5000/k8s-skaffold/php-test.
It is of course undesirable to hard-code an address, so a better way is to omit the "host" part entirely:
image: k8s-skaffold/php-test:v0.1
In this case your CRI (Container Runtime Interface) plugin will try a sequence of registries, for instance docker.io. The registries are configurable, but unfortunately I don't know how to configure this for Docker, since I use cri-o myself.
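For cri-o (and other runtimes that use the containers/image stack), the lookup order for unqualified image names can be set in /etc/containers/registries.conf. A minimal sketch, assuming a private registry reachable at registry.example.com:5000 (the address and the insecure setting are placeholders for a local/test setup):
# /etc/containers/registries.conf
unqualified-search-registries = ["registry.example.com:5000", "docker.io"]

[[registry]]
location = "registry.example.com:5000"
# Allow plain HTTP for a local/test registry only
insecure = true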

Accessing a Docker Hub registry image from a Helm chart using a deployment YAML file

I am trying to implement a CI/CD pipeline for my microservice using Jenkins, Kubernetes and Helm. I am using a Helm chart for packaging the YAML files and deploying them into the Kubernetes cluster. While learning how Helm charts and deployments work, I came across the image name definition in the deployment YAML file.
I have two questions:
If we only define the image name, will it automatically be pulled from Docker Hub? Or do we need to define anything additional in the deployment YAML file for pulling?
How does Helm Tiller communicate with the Docker Hub registry?
Docker image names in Kubernetes manifests follow the same rules as everywhere else. If you have an image name like postgres:9.6 or myname/myimage:foo, those will be looked up on Docker Hub like normal. If you're using a third-party repository (Google GCR, Amazon ECR, quay.io, ...) you need to include the repository name in the image name. It's the exact same string you'd give to docker run or docker build -t.
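As a concrete illustration, the same strings work in a pod spec as on the docker command line (the image names below are just the examples from this answer):
spec:
  containers:
    - name: db
      image: postgres:9.6                # official image, resolved on Docker Hub
    - name: app
      image: myname/myimage:foo          # user image on Docker Hub
    - name: other
      image: quay.io/myname/myimage:foo  # third-party registry named explicitly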
Helm doesn't directly talk to the Docker registry. The Helm flow here is:
1. The local Helm client sends the chart to the Helm Tiller.
2. Tiller applies any templating in the chart and sends the result to the Kubernetes API.
3. This creates a Deployment object with an embedded Pod spec.
4. Kubernetes creates Pods from the Deployment, which carry the image name references.
So if your Helm chart names an image that doesn't exist, all of this flow will run normally, until it creates Pods that wind up in ImagePullBackOff state.
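If that happens, the failure shows up on the Pods rather than at install time; something like the following (the pod name and output shown as comments are illustrative):
kubectl get pods
# NAME                      READY   STATUS             RESTARTS   AGE
# myapp-6d4cf56db6-xyz12    0/1     ImagePullBackOff   0          1m
kubectl describe pod myapp-6d4cf56db6-xyz12   # the Events section shows the pull error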
P.S.: if you're not already doing this, you should make the image tag (the part after the colon) configurable in your Helm chart, and declare your image name as something like myregistry.io/myname/myimage:{{ .Values.tag }}. Your CD system can then give each build a distinct tag and pass it into helm install. This makes it possible to roll back fairly seamlessly.
Run the command below. It will generate a blank chart with a values.yaml; add key-value pairs inside values.yaml and use them in your deployment.yaml file as variables.
helm create mychart
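For instance, a key added to the generated values.yaml can be referenced from templates/deployment.yaml like this (the key names are just examples):
# values.yaml
image:
  repository: myname/myimage
  tag: "1.0.0"

# templates/deployment.yaml (container image line)
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"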
