I am trying to run multiple kubectl commands using the Kubernetes#1 task in an Azure DevOps pipeline, however I am not sure how to do this.
kubectl exec $(kubectl get pods -l app=deployment_label -o custom-columns=:metadata.name --namespace=some_name_space) --namespace=some_namespace -- some command
If what you want is to input these multiple commands into the Command parameter of the task:
Unfortunately, no, the task does not currently support combining commands this way.
As the doc described:
The command input accepts only one of these commands, which means you can input only one command in each Kubernetes#1 task.
Also, if you type a command instead of selecting one from the list, it cannot fall outside the set of commands allowed by this task, and it must follow a strict format like this:
For the commands you provided, if you want to keep using the Kubernetes#1 task, you had better split them into separate tasks, one per command. You could check this blog for detailed usage.
As a workaround, if you still want to execute these commands together, you can use the Azure CLI task (if you are connecting to an Azure Kubernetes cluster) or the Command line task (if you are connecting to a local Kubernetes server).
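As a sketch of that workaround, a single script step can chain both of your kubectl invocations in one task; the label, namespace, and trailing command below are placeholders taken from your question:

```yaml
steps:
- script: |
    # capture the pod name first, then exec into it
    POD=$(kubectl get pods -l app=deployment_label \
      -o custom-columns=:metadata.name --namespace=some_name_space)
    kubectl exec "$POD" --namespace=some_name_space -- some command
  displayName: Run chained kubectl commands
```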
I'm executing some experiments on a Kubeflow cluster and I was wondering if there is a faster way than using the Kubeflow UI to set up the run input parameters.
I would like to connect from the command line to the Kubeflow cluster and run executions from there, but I cannot find any documentation.
Thanks
Kubeflow Pipelines has a command-line tool called kfp; for example, you can use kfp run submit to start a run.
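A minimal sketch, assuming the kfp CLI is installed and configured against your cluster; the experiment name, run name, package file, and parameters below are all placeholders:

```shell
# submit a run from the command line instead of the Kubeflow UI
kfp run submit \
  -e my-experiment \   # experiment to file the run under (placeholder)
  -r my-run \          # display name for this run (placeholder)
  -f pipeline.yaml \   # compiled pipeline package (placeholder)
  learning_rate=0.01   # run input parameters as key=value pairs
```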
In this release pipeline, I have two tasks: one runs a kubectl command, and I need it to keep running while I run the second task. After researching for a while, I know that parallel tasks are not available in Azure DevOps, so I tried multiple agents. However, I could not make it work.
May I know which part I am missing?
My current config looks like this:
And in each of the agents, I selected "Multi-Agent" on parallelism with number of 2.
But it is not the behavior I want.
What I want is: run the first job with the kubectl port-forward command, and keep it running while the second job runs. After the second job's Run script task finishes, the first job can end.
May I know in Azure DevOps is there a way to achieve this?
Thank you so much.
The easiest would actually be to use separate stages. But if you want to use a single stage, you can do it as follows:
Define variable like this:
Configure parallelism on the job:
And then define custom condition on the tasks:
One task should have eq(variables['Script'], 'one') and the other eq(variables['Script'], 'two')
You will get two agents running your job, but each agent will actually execute only one task:
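In YAML terms, the same idea can be sketched like this (classic pipelines configure the same thing through the UI; the service name and script below are placeholders):

```yaml
jobs:
- job: PortForwardAndRun
  strategy:
    matrix:              # spawns one agent per value of Script
      agent_one: { Script: 'one' }
      agent_two: { Script: 'two' }
  steps:
  # runs only on the agent where Script == one, keeps the tunnel open
  - script: kubectl port-forward svc/my-service 8080:80
    condition: eq(variables['Script'], 'one')
  # runs only on the agent where Script == two
  - script: ./run-tests.sh
    condition: eq(variables['Script'], 'two')
```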
We're building out a release pipeline in Azure DevOps which pushes to a Kubernetes cluster. The first step in the pipeline is to run an Azure CLI script which sets up all the resources - this is an idempotent script so we can run it each time we run the release pipeline. Our intention is to have a standardised release pipeline which we can run against several clusters, existing and new.
The final step in the pipeline is to run the Kubectl task with the apply command.
However, this pipeline task requires specifying in advance (at the time of building the pipeline) the names of the resource group and cluster against which it should be executed. But the point of the idempotent script in the first step is to ensure that the resources exist, creating them if they do not.
So there's the possibility that neither the resource group nor the cluster will exist before the pipeline is run.
How can I achieve this in a DevOps pipeline if the Kubectl task requires a resource group and a cluster to be specified at design time?
This Kubectl task works with the Azure Resource Manager service connection type. It requires you to select the Resource group field and the Kubernetes cluster field after you select the Azure subscription, as below.
After testing, we found that these two fields support variables. Thus you can use a variable in each of these fields, and use a PowerShell task to set the variable values before the Kubectl task runs. See: Set variables in scripts for details.
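As a YAML sketch of that pattern (classic release pipelines set the same fields in the UI; the resource group, cluster, and service connection names are placeholders):

```yaml
steps:
- powershell: |
    # in practice these values would come from your idempotent setup script
    Write-Host "##vso[task.setvariable variable=resourceGroup]my-rg"
    Write-Host "##vso[task.setvariable variable=clusterName]my-aks"
  displayName: Resolve cluster details
- task: Kubernetes@1
  inputs:
    connectionType: Azure Resource Manager
    azureSubscriptionEndpoint: my-service-connection   # placeholder
    azureResourceGroup: $(resourceGroup)               # resolved at runtime
    kubernetesCluster: $(clusterName)                  # resolved at runtime
    command: apply
    arguments: -f manifests/
```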
I'm running Terraform script from Rundeck. When I run terraform plan, complete output should go to slack. If everything is fine, I need to approve it in slack. Then it should run terraform apply.
You can design a job that executes terraform plan as a step with a Slack notification, and use this app to call another job that executes terraform apply (in the same way as the first job). When calling your Terraform scripts, it may be a good idea to use the -auto-approve argument to avoid interactive prompts on Rundeck; another alternative is to use expect to drive your Terraform scripts.
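The two Rundeck job steps could be sketched like this; the plan file name is a placeholder, and the Slack posting and approval happen through the Rundeck notification plugin between the two jobs:

```shell
# job 1: produce a saved plan; its output is what gets posted to Slack
terraform plan -out=tfplan

# job 2 (triggered after approval in Slack): apply non-interactively
terraform apply -auto-approve tfplan
```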
I'm looking to fully understand the jobs in kubernetes.
I have successfully created and executed a job, but I do not see the use case.
Not being able to rerun a job, or to actively listen for its completion, makes me think it is a bit difficult to manage.
Anyone using them? Which is the use case?
Thank you.
A job retries pods until they complete, so that you can tolerate errors that cause pods to be deleted.
If you want to run a job repeatedly and periodically, you can use a CronJob (alpha at the time of writing) or cronetes.
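For example, a CronJob manifest that runs a Job on a schedule might look like the sketch below; the name, schedule, image, and command are placeholders (the API group was alpha/beta originally, batch/v1 on current clusters):

```yaml
apiVersion: batch/v1      # batch/v2alpha1 when CronJob was alpha
kind: CronJob
metadata:
  name: myjob-cron
spec:
  schedule: "0 2 * * *"   # every day at 02:00
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: backup
            image: busybox
            command: ["sh", "-c", "echo running scheduled work"]
          restartPolicy: OnFailure
```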
Some Helm Charts use Jobs to run install, setup, or test commands on clusters, as part of installing services. (Example).
If you save the YAML for the job, then you can re-run it by deleting the old job and creating it again, or by editing the YAML to change the name (or using e.g. sed in a script).
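The rename trick can be sketched as follows; the manifest content and the job name myjob are placeholders, and the kubectl lines are left commented since they need a live cluster:

```shell
# a saved Job manifest (placeholder content)
cat > /tmp/myjob.yaml <<'EOF'
apiVersion: batch/v1
kind: Job
metadata:
  name: myjob
spec:
  template:
    spec:
      containers:
      - name: main
        image: busybox
        command: ["echo", "done"]
      restartPolicy: Never
EOF

# give the new run a distinct name so it can be created alongside/after the old one
sed 's/name: myjob$/name: myjob-rerun/' /tmp/myjob.yaml > /tmp/myjob-rerun.yaml

# kubectl delete job myjob                   # or keep the old job and just
# kubectl create -f /tmp/myjob-rerun.yaml    # create the renamed copy
grep 'name:' /tmp/myjob-rerun.yaml
```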
You can watch a job's status with this command:
kubectl get jobs myjob -w
The -w option watches for changes. You are looking for the SUCCESSFUL column to show 1.
Here is a shell command loop to wait for job completion (e.g. in a script):
until kubectl get jobs myjob -o jsonpath='{.status.conditions[?(@.type=="Complete")].status}' | grep True ; do sleep 1 ; done
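On newer kubectl versions there is also a built-in way to block until the job finishes, which avoids the polling loop; assuming the job is named myjob:

```shell
# exits 0 once the Complete condition is set, non-zero on timeout
kubectl wait --for=condition=complete job/myjob --timeout=300s
```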
One of the use cases can be taking a backup of a DB. But, as already mentioned, there is some overhead to running a Job: when a Job completes, its Pods are not deleted, so you need to delete the Job manually (which also deletes the Pods it created). So the recommended option is to use a CronJob instead of a bare Job.