How can I restart only one service by using skaffold? - kubernetes

I use skaffold for a k8s-based microservices app. I use skaffold dev and skaffold run to start the services, and skaffold delete to remove them so I can restart everything.
If I need to restart only one service, what must I do?

According to the docs:
Use skaffold dev to build and deploy your app every time your code changes,
Use skaffold run to build and deploy your app once, similar to a CI/CD pipeline
1. Deploy your services:
skaffold run --filename=skaffold_test_1.yaml
(in addition you can have multiple workflow configurations).
2. Change your skaffold workflow configuration and run:
skaffold delete --filename=skaffold_test2.yaml
Using this approach, your deployments are not removed when skaffold exits, unlike with the skaffold dev command.
Basically, by managing the content of the skaffold workflow configuration (adding or removing manifest entries), you can deploy or remove a particular service.
apiVersion: skaffold/v1
kind: Config
...
deploy:
  kubectl:
    manifests:
    - k8s-service1.yaml
    # - k8s-service2.yaml

You can use skaffold dev's --watch-image flag to restrict which artifacts are monitored for changes. It takes a comma-separated list of image names, as specified by each artifact's image field (artifact.image) in skaffold.yaml.
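For example, to rebuild and redeploy only one service when its code changes (service1-image is a placeholder for whatever artifact.image is set to in your skaffold.yaml):
skaffold dev --watch-image=service1-image
Source changes in the other artifacts will then no longer trigger a rebuild or redeploy.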


Container deployment with self-managed kubernetes in AWS

I am relatively new to AWS and kubernetes. I have created a self-managed kubernetes cluster running in AWS (not using EKS). I have successfully created a pipeline in AWS CodePipeline that builds my container and uploads it to ECR. Currently I am manually deploying the created image in the cluster by running the following commands:
kubectl delete deployment my-service
kubectl apply -f my-service_deployment.yaml
How can I automate this manual step in AWS CodePipeline? How can I run the above commands as part of the pipeline?
Regarding my deployment yaml files, where should I store these files? (currently I store them locally in the master node.)
I am missing some best practices for this process.
Your YAML manifests shouldn't live on your master node (ever); they should be stored in a version control system (GitHub/GitLab/Bitbucket etc.).
To automate deployment of your Docker image whenever a new version lands in ECR, you can use a great tool named FluxCD. It is very simple to install (https://fluxcd.io/docs/get-started/) and you can configure it to automatically deploy images to your cluster each time a new image is pushed to your ECR registry.
This way CodePipeline builds the code, runs the tests, builds the image, tags it and pushes it to ECR, and FluxCD deploys it to Kubernetes. (FluxCD also reconciles your cluster on a configurable interval, so even a small change to your manifests will be deployed automatically.)
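To sketch the idea (this assumes Flux's image automation controllers are installed; the ECR URL, names and version range below are placeholders, not from the original setup), you would declare resources along these lines so that Flux scans ECR for new tags:
apiVersion: image.toolkit.fluxcd.io/v1beta1
kind: ImageRepository
metadata:
  name: my-service
  namespace: flux-system
spec:
  image: <aws_account_id>.dkr.ecr.<region>.amazonaws.com/my-service  # placeholder ECR repository
  interval: 1m
---
apiVersion: image.toolkit.fluxcd.io/v1beta1
kind: ImagePolicy
metadata:
  name: my-service
  namespace: flux-system
spec:
  imageRepositoryRef:
    name: my-service
  policy:
    semver:
      range: '>=1.0.0'   # pick the newest tag matching this range
Flux's ImageUpdateAutomation can then commit the new tag back to the manifests in Git, from where the normal reconciliation rolls it out to the cluster.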
You can also make use of Argo CD; it is very easy to install and use compared to AWS CodePipeline.
Argo CD was designed specifically for Kubernetes and thus offers a much better way to deploy to k8s.
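If you go that route, a minimal Argo CD Application pointing at the repository that holds your manifests could look roughly like this (the repository URL, path and names are placeholders, not taken from the original setup):
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-service
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/<your-org>/<manifests-repo>.git  # placeholder repo containing my-service_deployment.yaml
    targetRevision: main
    path: k8s
  destination:
    server: https://kubernetes.default.svc
    namespace: default
  syncPolicy:
    automated:          # sync automatically when the manifests change
      prune: true
      selfHeal: true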

How to run a script which starts a Kubernetes cluster on Azure DevOps

I am trying to start a Kubernetes cluster and then run tests and publish the results. Do you have any idea how this can be done?
I created a pipeline but I do not know which YAML to use.
Which task should I add first: Kubernetes deploy or something else?
The Kubernetes deployment.yml file references the container image (e.g. exampleacr.io/sampleapp) that we are going to publish to AKS (apiVersion: apps/v1).
The service.yml just exposes the application (apiVersion: v1).
Both YAML files need to be added; please refer to WAY 2 for modifying them manually. A minimal sketch of the two manifests follows.
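For orientation only, this is roughly what those two manifests typically look like; the names, labels and port are illustrative placeholders rather than values from the original answer:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sampleapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sampleapp
  template:
    metadata:
      labels:
        app: sampleapp
    spec:
      containers:
      - name: sampleapp
        image: exampleacr.io/sampleapp   # image published to ACR by the build stage
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: sampleapp
spec:
  type: LoadBalancer
  selector:
    app: sampleapp
  ports:
  - port: 80
    targetPort: 80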
WAY 1:
Quick way: the Deploy to Azure Kubernetes Service template will do everything that's needed, because when you use that template the required variables get defined for you.
Steps:
Create an AKS cluster and an ACR (container registry) in Azure.
In Azure DevOps:
Create a pipeline and choose a source, for example an application hosted in GitHub.
Then select Deploy to Azure Kubernetes Service, select your AKS subscription, select the existing cluster, then select the container registry you want to push the Docker image into. Keep the remaining settings at their defaults.
Click on Validate and configure; Azure Pipelines will generate a YAML file.
In the review step for azure-pipelines.yml you have two stages: Build and Deploy.
Click Save and run: this saves the YAML file in the master branch and creates the manifest files (deployment.yml and service.yml) for the Kubernetes deployment.
Clicking Save and run will also trigger a build.
Reference
WAY 2: Using the Docker template
To modify azure-pipelines.yml yourself, in the third step above select Docker instead of Deploy to Azure Kubernetes Service.
Under Configure your pipeline, the Dockerfile is referenced relative to the sources directory, e.g. $(Build.SourcesDirectory)/app/Dockerfile; this is what the pipeline builds.
In the review step for azure-pipelines.yml a few things can be modified:
You can change the tag variable to the repository name, and the deployment.yml and service.yml files can be added to the pipeline YAML with a few modifications.
The Build stage is generated automatically and needs no changes.
You have to add the push and deploy stages to the YAML file as shown in the article; a rough sketch of what those stages can look like follows below.
And get source code here.
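As a rough sketch only (not the article's exact YAML; the service connection names, repository name and manifest paths are placeholders), the build/push and deploy stages could look something like this:
trigger:
- master

variables:
  tag: '$(Build.BuildId)'

stages:
- stage: Build
  jobs:
  - job: Build
    pool:
      vmImage: ubuntu-latest
    steps:
    - task: Docker@2
      displayName: Build and push image to ACR
      inputs:
        command: buildAndPush
        repository: sampleapp
        dockerfile: '$(Build.SourcesDirectory)/app/Dockerfile'
        containerRegistry: 'my-acr-service-connection'   # placeholder Docker registry service connection
        tags: |
          $(tag)

- stage: Deploy
  dependsOn: Build
  jobs:
  - job: Deploy
    pool:
      vmImage: ubuntu-latest
    steps:
    - task: KubernetesManifest@0
      displayName: Deploy manifests to AKS
      inputs:
        action: deploy
        kubernetesServiceConnection: 'my-aks-service-connection'   # placeholder Kubernetes service connection
        manifests: |
          manifests/deployment.yml
          manifests/service.yml
        containers: 'exampleacr.io/sampleapp:$(tag)'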

Skaffold dev stream logs of pods created by helm hooks

I would like to see the output from my pre-install/post-install helm hooks when using skaffold dev, but this does not seem to work.
Which filters does skaffold use to get all the pods for log tailing? Is there a way to force skaffold to pick up the hooks by applying some labels (e.g. skaffold.dev/run-id: static) ?
Context
Doing dev with local docker, the image building is pretty fast, so for some use cases there is no need to use file sync and special dev-mode container images with file watching inside.
There is this feature request: https://github.com/GoogleContainerTools/skaffold/issues/1441, but this is for adding hooks to skaffold itself.
The pods created by helm hooks are not removed (https://github.com/GoogleContainerTools/skaffold/issues/2876), but this is expected behavior for helm delete.
Thanks @acristu for the question. Skaffold dev here.
Currently, skaffold is unaware of pods deployed by the pre-install and post-install helm hooks.
The reason is that we don't parse the manifests in these hooks and hence can't transform them to add the required skaffold.dev/run-id label.
Currently there is no way to force skaffold to pick up the logs from these pods/containers.
That said, there is a pending feature request to extend the current log configuration to include resourceType or resourceName, like the portForward section:
portForward: # describes user defined resources to port-forward.
- resourceType: # Kubernetes type that should be port forwarded.
  resourceName:
Supporting this in skaffold would be a great idea.
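Not part of the original answer, but as a manual stopgap you can tail the hook pods yourself in a second terminal while skaffold dev is running, assuming your hook templates carry an identifying label (the label and release name below are placeholders, e.g. helm's common app.kubernetes.io/instance label if your chart sets it on hook resources):
kubectl logs --follow --selector app.kubernetes.io/instance=<release-name> --all-containers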

Tie skaffold profile to cluster

Building off another one of my questions about tying profiles to namespaces, is there a way to tie profiles to clusters?
I've found a couple times now that I accidentally run commands like skaffold run -p local -n skeleton when my current kubernetes context is pointing to docker-desktop. I'd like to prevent myself and other people on my team from committing the same mistake.
I found that there's a way of specifying contexts but that doesn't play nicely if developers use custom contexts like kubectl config set-context custom --user=custom --cluster=custom. I've also found a cluster field in the skaffold.yaml reference but it seems that doesn't satisfy my need because it doesn't let me specify a cluster name.
After digging through the skaffold documentation and performing several tests I finally managed to find at least a partial solution to your problem, maybe not the most elegant one, but still functional. If I find a better way I will edit my answer.
Let's start from the beginning:
As we can read here:
When interacting with a Kubernetes cluster, just like any other
Kubernetes-native tool, Skaffold requires a valid Kubernetes context
to be configured. The selected kube-context determines the Kubernetes
cluster, the Kubernetes user, and the default namespace. By default,
Skaffold uses the current kube-context from your kube-config file.
This is quite an important point: we always start from the kube-context, and based on it we can trigger a specific profile, never the opposite.
Important to remember: the kube-context is not activated based on the profile; the opposite is true: a specific profile is triggered based on the current context (selected by kubectl config use-context).
Although we can overwrite default settings from our skaffold.yaml config file by patching (compare the related answer), it's not possible to overwrite the current context based on the selected profile, e.g. manually as in your command:
skaffold -p prod
Here you are manually selecting specific profile. This way you bypass automatic profile triggering. As the documentation says:
Activations in skaffold.yaml: You can auto-activate a profile based on
kubecontext (could be either a string or a regexp: prefixing with ! will negate the match)
environment variable value
skaffold command (dev/run/build/deploy)
To keep it simple, let's say we want to activate our profile based on the current kube-context only; however, we can join different conditions together with AND and OR, like in the example here.
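Purely for illustration (the context name and environment variable are made up), conditions inside a single activation entry are ANDed, while separate entries are ORed:
profiles:
- name: prod
  activation:
  - kubeContext: prod-cluster    # this entry matches only if BOTH conditions hold (AND)
    env: ENV=production
  - command: run                 # a second entry is an alternative trigger (OR)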
solution
I want to make sure that if I run skaffold -p prod skaffold will fail
if my kubecontext points to a cluster other than my production
cluster.
I'm afraid it cannot be done this way. If you've already manually selected the prod profile with -p prod, you're bypassing profile selection based on the current context; you've already chosen what should be done regardless of where it will be done (the currently selected kube-context). In this situation skaffold doesn't have any mechanism that would prevent you from running something on the wrong cluster. In other words, you are forcing a certain behaviour of your pipeline, and you agreed to it by selecting the profile. If you give up using the -p or --profile flags, certain profiles will never be triggered unless the currently selected kube-context activates them automatically; skaffold just won't let that happen.
Let's look at the following example showing how to make it work:
apiVersion: skaffold/v2alpha3
kind: Config
metadata:
  name: getting-started
build:
  artifacts:
  - image: skaffold-example
    docker:
      dockerfile: NonExistingDockerfile # the pipeline will fail at build stage
  cluster:
deploy:
  kubectl:
    manifests:
    - k8s-pod.yaml
    flags:
      global: # additional flags passed on every command.
      - --namespace=default
  kubeContext: minikube
profiles:
- name: prod
  patches:
  - op: replace
    path: /build/artifacts/0/docker/dockerfile
    value: Dockerfile
  - op: replace
    path: /deploy/kubectl/flags/global/0
    value: --namespace=prod
  activation:
  - kubeContext: minikube
    command: run
  - kubeContext: minikube
    command: dev
In the general part of our skaffold.yaml config we configured:
dockerfile: NonExistingDockerfile # the pipeline will fail at build stage
As long as we name our Dockerfile "NonExistingDockerfile", every pipeline will fail at its build stage. So by default all builds, no matter which kube-context is selected, are destined to fail. However, we can override this default behaviour by patching the specific fragment of skaffold.yaml in our profiles section and setting the Dockerfile back to its standard name. This way every:
skaffold run
or
skaffold dev
command will succeed only if the current kube-context is set to minikube. Otherwise it will fail.
We can check it with:
skaffold run --render-only
after first setting our current kube-context to the one that matches what is present in the activation section of our profile definition.
I've found a couple times now that I accidentally run commands like
skaffold run -p local -n skeleton when my current kubernetes context
is pointing to docker-desktop. I'd like to prevent myself and other
people on my team from committing the same mistake.
I understand your point that it would be nice to have some built-in mechanism that prevents overriding the automatic profile activation configured in skaffold.yaml with command-line options, but it looks like currently it isn't possible. If you don't specify -p local, skaffold will always choose the correct profile based on the current context. Well, it looks like good material for a feature request.
I was able to lock down the kubeContext for Skaffold both ways with:
skaffold dev --profile="dev-cluster-2" --kube-context="dev-cluster-2"
I also set in skaffold.yaml:
profiles:
- name: dev-cluster-2
  activation:
  - kubeContext: dev-cluster-2
  deploy:
    kubeContext: dev-cluster-2
It seems that using this combination is telling skaffold explicitly enough to not use the currentContext of $KUBECONFIG. With this combination, if --kube-context is missing from the cli parameters, the activation step in skaffold.yaml will trigger an error message if currentContext in $KUBECONFIG differs from the expected kubeContext of the activated Skaffold profile.
Hope this helps fellow developers who feel the pain when skaffold unexpectedly switches the current Kubernetes cluster because the currentContext in $KUBECONFIG was changed as a side effect of, e.g., another terminal window.

Helm on Minikube: update local image

I have Minikube (v1.1.0) running locally with Helm (v2.13.1) initialized, and I connected the local Docker daemon with Minikube by running eval $(minikube docker-env). In the code base of my application I created a chart with helm create chart. I changed the first few lines of ./chart/values.yml to:
image:
  repository: app-development
  tag: latest
  pullPolicy: Never
I build the image locally and install/upgrade the chart with Helm:
docker build . -t app-development
helm upgrade --install example ./chart
Now, this works perfectly the first time, but if I make changes to the application I would like to run the above two commands again to upgrade the image. Is there any way to get this working?
workaround
To get the expected behaviour I can delete the chart from Minikube and install it again:
docker build . -t app-development
helm del --purge example
helm install example ./chart
When you make a change like this, Kubernetes is looking for some change in the Deployment object. If it sees that you want 1 Pod running app-development:latest, and it already has 1 Pod running an image named app-development:latest, then it's in the right state and it doesn't need to do anything (even if the local image that has that tag has changed).
The canonical advice here is to never use the :latest tag with Kubernetes. Every time you build an image, use a distinct tag (a time stamp or the current source control commit ID are easy unique things). With Helm it's easy enough to inject this based on a value you pass in:
image: app-development:{{ .Values.tag | default "latest" }}
This sort of build sequence would look a little more like
TAG=$(date +%Y%m%d-%H%M%S)
docker build -t "app-development:$TAG" .
helm upgrade --install example ./chart --set "tag=$TAG"
If you're actively developing your component you may find it easier to try to separate out "hacking on code" from "deploying into Kubernetes" as much as you can. Some amount of this tends to be inevitable, but Kubernetes really isn't designed to be a live development environment.
One way you could solve this problem is using Minikube and Cloud Code from Google. When you initialize Cloud Code in your project, it creates a skaffold.yaml at the root location. You can put the helm chart for the same project in the same code base. Go ahead and edit this configuration to match the folder location of the helm chart:
deploy:
  helm:
    releases:
    - name: <chart_name>
      chartPath: <folder path relative to this file>
Now, when you click on Cloud Code at the bottom of your Visual Studio Code editor (or any supported editor), it should give you a list of options (screenshot: https://i.stack.imgur.com/vXK4U.png).
Select "Run on Kubernetes" from the list.
The only change you'll have to make in your helm chart is to read the image URL from the skaffold.yaml using a profile.
profiles:
- name: prod
  deploy:
    helm:
      releases:
      - name: <helm_chart_name>
        chartPath: helm
        skipBuildDependencies: true
        artifactOverrides:
          image: <url_production_image_url>
This will read the image from the configured URL, whereas locally it is read from the Docker daemon. Cloud Code also provides hot update/deployment when you make changes to any file, so there is no need to always specify an image tag while testing locally. Once you're happy with the code, update the image to the latest version number, which should trigger a deployment in your integration/dev environment.
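Purely as an illustration (not from the original answer), assuming the profile is named prod as in the snippet above, a command-line deployment with the production image override would look like:
skaffold run -p prod
while a plain skaffold dev during local development keeps building against the local Docker daemon.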