Vercel: Running a Node script after deployment? - deployment

Is there a post-deploy hook or any other way to run a Node script after deployment on Vercel?
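One workaround, sketched below under the assumption that you trigger deployments from your own CI with the Vercel CLI rather than from Vercel's Git integration (the token variable and script path are placeholders):

vercel deploy --prod --token="$VERCEL_TOKEN"   # deploy via the CLI from your CI step
node ./scripts/post-deploy.js                  # hypothetical script; runs once the CLI returns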

Related

Issues running kubectl from Jenkins

I have deployed Jenkins on a Kubernetes cluster using a Helm chart, following this guide:
https://octopus.com/blog/jenkins-helm-install-guide
I have the pods and services running in the cluster. I was trying to create a pipeline to run some kubectl commands, but it fails with the error below:
java.io.IOException: error=2, No such file or directory
Caused: java.io.IOException: Cannot run program "kubectl": error=2, No such file or directory
at java.base/java.lang.ProcessBuilder.start(ProcessBuilder.java:1128)
I thought it had something to do with the Kubernetes CLI plugin for Jenkins and raised an issue here:
https://github.com/jenkinsci/kubernetes-cli-plugin/issues/108
I was advised to install kubectl inside the Jenkins pod.
The Jenkins pod is already running (deployed using the Helm chart). I have seen suggestions to include the kubectl binary as part of the Dockerfile, but since I used the Helm chart I'm not sure I have the luxury of editing the image and redeploying the pod to add kubectl.
Can you please help me resolve this? Are there any steps or documentation that explain how to install kubectl in a running pod? I'd really appreciate your input, as this issue has blocked one of my critical projects. Thanks in advance.
I tried setting the role binding for the Jenkins service account as mentioned here:
Kubernetes commands are not running inside the Jenkins container
I haven't installed kubectl inside the pod yet. Please help.
Jenkins pipeline:
kubeconfig(credentialsId: 'kube-config', serverUrl: '')
sh 'kubectl get all --all-namespaces'
(The pod/service details for Jenkins were attached as a screenshot.)
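A minimal sketch of installing kubectl into the already-running pod (assuming the Jenkins container has curl and the user can write to /usr/local/bin, and accepting that anything installed this way disappears when the pod restarts; the pod name and namespace are placeholders):

kubectl exec -it <jenkins-pod-name> -n <jenkins-namespace> -- bash -c '
  curl -LO "https://dl.k8s.io/release/$(curl -Ls https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl" &&
  install -m 0755 kubectl /usr/local/bin/kubectl
'

A more durable alternative is to run the kubectl stage in a pipeline agent container whose image already ships kubectl, so nothing has to be installed into the controller pod at all.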

Run kubectl command after CronJob is scheduled but before it runs

I have multiple CronJobs scheduled to run on certain days. I deploy them with Helm charts and need to test them, so I'm looking for a way to run a CronJob once, right after it is deployed with Helm.
I have already tried using Helm post-hooks to create a Job from the CronJob:
kubectl create job --from=cronjob/mycronjobname mycronjob-run-0
but for this I have to create a separate container with a kubectl Docker image, which is not a good option for me. (Another time it waited until the CronJob was executed before running the Job; maybe that was a mistake on my part.)
I also tried creating a separate Job to execute this command, but Helm deploys the Job first and only then the CronJob, so that isn't an option either.
I also tried adding a postStart lifecycle hook to the CronJob container, but it also waits for the CronJob to be executed according to its schedule.
Is there any way to do this?
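One variant of the first attempt that avoids the extra kubectl container entirely is to trigger the run from the same shell or CI step that installs the chart. A minimal sketch, with the release name, chart path, and CronJob name as placeholders:

helm upgrade --install myrelease ./mychart                           # deploy (or update) the chart
kubectl create job --from=cronjob/mycronjobname mycronjob-manual-1   # run the CronJob once immediately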

How to make sure that the script is run from only one of the PODs of all replicas?

Our product is running on Kubernetes/Docker.
I have a POD that will run with multiple replicas. There is a script in the POD that is run after the POD starts, and it needs the main process in the POD to be running.
The problem is that the script should be run from only one POD; with multiple replicas it will be run multiple times.
How to make sure that the script is run from only one of the PODs of all replicas?
Thanks
Chandra
It's not possible to run a script from only one replica of a pod. Use a CronJob for such a use case.
If you are using Helm, a Helm post-install hook can be used.
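A minimal sketch of that post-install hook, assuming the script is available in an image of your own (the names, image, and script path are placeholders, and whether this fits depends on how much the script really needs the main container's process):

apiVersion: batch/v1
kind: Job
metadata:
  name: one-time-startup-script
  annotations:
    "helm.sh/hook": post-install
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  backoffLimit: 1
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: run-script
        image: myapp:1.0.0                     # placeholder image that contains the script
        command: ["/bin/sh", "-c", "/scripts/startup.sh"]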

Use Kubernetes pod to run script on host node

I have a DaemonSet that places a pod onto all of my cluster's nodes. That pod looks for a set of conditions. When they are found, it is supposed to execute a bash script on its node.
Currently the pod that I apply as a DaemonSet mounts the directory with the bash script. I am able to detect the conditions I am looking for, but when I execute the bash script it ends up running in my Alpine container inside the pod, not on the host node.
As a simple example of what is not working for me (in the spec):
command: ["/bin/sh"]
args: ["-c", "source /mounted_dir/my_node_script.sh"]
I want to execute the bash script on the NODE the pod is running on, not within the container/pod. How can this be accomplished?
Actually, a command run inside a pod is run on the host; it's a container (Docker), not a virtual machine.
If your actual problem is that you want to do something that a normal container isn't allowed to do, you can run the pod in privileged mode or otherwise configure exactly what you need.
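If the script genuinely has to run in the node's own namespaces (host filesystem, host processes), one common pattern, shown as a minimal sketch below, is a privileged pod that shares the host PID namespace and uses nsenter to enter PID 1's namespaces. The image, labels, and on-node script path are placeholders, and the image must contain nsenter (e.g. from util-linux):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-script-runner
spec:
  selector:
    matchLabels:
      app: node-script-runner
  template:
    metadata:
      labels:
        app: node-script-runner
    spec:
      hostPID: true                            # share the node's PID namespace
      containers:
      - name: runner
        image: alpine:3.19                     # placeholder; install util-linux for nsenter
        securityContext:
          privileged: true                     # required to enter the host's namespaces
        command: ["/bin/sh", "-c"]
        # enter the node's namespaces via PID 1, then run the script; the path
        # must be where the script lives on the node, not inside the container
        args: ["nsenter -t 1 -m -u -i -n -p -- /bin/bash /path/on/node/my_node_script.sh"]

In practice the nsenter call would sit inside the condition-checking loop the pod already runs, rather than being the container's only command.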

Service Fabric doesn't run a docker pull on deployment

I've set up VSTS to deploy a Service Fabric app with a Docker guest container. All goes well, but Service Fabric doesn't download the latest version of my image; a docker pull doesn't seem to be performed.
I've added the 'Service Fabric PowerShell script' task with a 'docker pull' command, but this is then only run on one of the nodes.
Is there a way to run a powershell script/command during deployment, either in VSTS or Service Fabric, to run a command across all the nodes to do a docker pull?
Please use an explicit version tag; don't rely on 'latest'. An easy way to do this in VSTS: in the 'Push Services' task, add $(Build.BuildId) in the 'Additional Image Tags' field to tag your image.
Next, you can use a tokenizer to replace the ServiceManifest.xml image tag value in your release pipeline. One of my favorites is this one.
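For reference, this is the kind of ServiceManifest.xml fragment the tokenizer would rewrite; the registry, image name, and the #{...}# token syntax are placeholders that depend on the tokenizer task you pick:

<CodePackage Name="Code" Version="1.0.0">
  <EntryPoint>
    <ContainerHost>
      <!-- the token is replaced with the build-specific tag at release time -->
      <ImageName>myregistry.azurecr.io/myapp:#{Build.BuildId}#</ImageName>
    </ContainerHost>
  </EntryPoint>
</CodePackage>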
To deploy Docker containers to Service Fabric, you have to provide either a Docker Compose file or a Service Fabric application package with manifests.
For containers, the Service Fabric hosting system controls the Docker host on the nodes to run them.
For VSTS deployments, there's a Service Fabric Deploy task and a Service Fabric Compose Deploy task for both paths.
Container quick starts for Service Fabric:
See here for Windows: https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-quickstart-containers
Here for Linux: https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-quickstart-containers-linux