I'm executing some experiments on a Kubeflow cluster and I was wondering if there is a faster way than using the Kubeflow UI to set up the run input parameters.
I would like to connect to the Kubeflow cluster from the command line and start runs from there, but I cannot find any documentation.
Thanks
Kubeflow Pipelines has a command-line tool called kfp; for example, you can use kfp run submit to start a run.
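As a minimal sketch (the endpoint, experiment name, package file, and parameter names below are placeholders, and the exact flags vary between kfp releases, so check kfp run submit --help):

# point the CLI at the cluster's pipeline endpoint (e.g. via a port-forward), then submit a run
kfp --endpoint http://localhost:8080 run submit \
  -e my-experiment \
  -r my-run \
  -f my_pipeline.yaml \
  learning_rate=0.01 epochs=10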
I am trying to run multiple kubectl commands using the Kubernetes#1 task in an Azure DevOps pipeline, but I am not sure how to do this.
kubectl exec $(kubectl get pods -l app=deployment_label -o custom-columns=:metadata.name --namespace=some_name_space) --namespace=some_namespace -- some command
If what you want is to put these multiple commands into the Command parameter of the task: unfortunately, no, the task does not currently support combining commands this way.
As the documentation describes:
The command input accepts only one of these commands, which means you can only run one command in each Kubernetes#1 task.
Also, even if you type a command yourself instead of selecting one from the list, it cannot go beyond the set of commands this task allows, and it has to follow the task's strict format.
For the commands you provided, if you still want to use the Kubernetes#1 task, you should split them into separate commands across multiple tasks. You can check this blog for detailed usage.
As a workaround, if you still want to execute these multiple commands in one step, you can use the Azure CLI task (if you are connecting to an Azure Kubernetes cluster) or the Command line task (if you are connecting to a local Kubernetes server).
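For example, in a Command line task the one-liner from the question can be split into readable steps, assuming kubectl on the agent is already authenticated against the cluster (the label, namespace, and command are placeholders taken from the question):

# resolve the pod name first, then run the command inside it
POD_NAME=$(kubectl get pods -l app=deployment_label \
  -o custom-columns=:metadata.name --namespace=some_name_space)
kubectl exec "$POD_NAME" --namespace=some_name_space -- some_command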
How does one instruct Pulumi to execute one or more commands on a remote host?
The Terraform equivalent is the remote-exec provisioner.
Pulumi currently doesn't support remote-exec-like provisioners, but they are on the roadmap (see https://github.com/pulumi/pulumi/issues/1691).
For now, I'd recommend using the cloud-init userdata functionality of the various providers as in this AWS EC2 example.
Pulumi supports this with the Command package as of 2021-12-31: https://github.com/pulumi/pulumi/issues/99#issuecomment-1003445058
I am a beginner in Kubernetes and have just started playing around with it. I have a use case where I take commands in through a UI (which can be a server running inside or outside the cluster) and run them in the Kubernetes cluster. Let's say the commands are Python scripts like HelloWorld.py. When I enter a command, the server should launch a container that runs the command and exits. How do I go about this in Kubernetes? What should the scheduler look like?
You can try the following training classes on katacoda.com:
https://learn.openshift.com/
https://www.katacoda.com/courses/kubernetes
They are interactive and hands-on, which makes it interesting and easy to make sense of OpenShift and Kubernetes.
I hope it helps you. ;)
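For the use case in the question above (launch a container that runs one command and then exits), a quick way to try it out by hand is kubectl run with --restart=Never; this is only a sketch with placeholder names, and a server that launches such containers programmatically would more likely create a Kubernetes Job per command:

# start a throwaway pod, stream its output, and delete it once the command finishes
kubectl run helloworld --image=python:3.9 --restart=Never --rm -i \
  --command -- python -c 'print("Hello World")'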
I have managed to use this command on my HDInsight cluster when I connect via SSH using the Azure CLI; however, I want to create an Azure PowerShell script that will run the following command, but I can't figure out how. I have tried searching for it online but can't find anything.
sudo -HE /usr/bin/anaconda/bin/conda install pandas
In this documentation, see the section titled "Apply a script action to a running cluster from Azure PowerShell". You will need to put your script in blob storage and then have the cluster execute that script on each node using an HDInsight script action. The nice thing about script actions is that when cluster maintenance patches the underlying servers and needs to take down a node and bring up a new one (or when you scale the cluster), the script action will run on any new nodes.
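As a rough sketch, the script you upload to blob storage can be as small as the command from the question wrapped in a bash file (the file name, set -e, and the -y flag that keeps conda from prompting are my additions):

#!/usr/bin/env bash
# install-pandas.sh -- referenced by the HDInsight script action via its blob storage URL
set -e
sudo -HE /usr/bin/anaconda/bin/conda install pandas -y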
I have two Snakemake workflows that are very similar. Both of them share a sub-workflow and a couple of includes. Both of them work when doing dry runs. Both of them use the same cluster config file, and I'm running them with the same launch command. One of them fails when submitting to the LSF cluster with this error:
Executing subworkflow wf_common.
WorkflowError:
Config file __default__ not found.
I'm wondering whether it's "legal" in Snakemake for two workflows to share a sub-workflow, like in this case, and if not, whether the fact that I ran the workflow that does work first could have this effect.
Can you try Snakemake 3.12.0? It fixed a bug with passing the cluster config to a subworkflow. I would think that this solves your problem.