Kill a hung webistrano deployment - capistrano

I'm currently trying to set up deployment using Webistrano/Capistrano. Due to a permission error, my test deployment did not complete. However, because it failed before the deployment PID was created, the cancel button does not appear in the Webistrano deployment interface.
I'm wondering how I can kill the deployment process and unlock the deployment stage (I assume there is a PID file or something somewhere on my system). Webistrano is running via Passenger through Apache on a CentOS 5 machine.
Any help would be appreciated.

To kill the hung deployment, simply change the status of the deployment in the 'deployments' db table. You'll also have to unlock the deployment stage by updating the 'stages' table and setting the 'locked' column to 0.
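For example, a minimal sketch against a MySQL-backed Webistrano install (the database name, credentials, row ids, and the exact status value below are assumptions; check what your 'deployments' table actually stores for finished deployments before updating):
$ mysql -u webistrano -p webistrano_production
-- mark the hung deployment as finished so the UI stops treating it as running
UPDATE deployments SET status = 'canceled' WHERE id = 42;
-- release the lock on the deployment stage
UPDATE stages SET locked = 0 WHERE id = 7;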

Related

How to check if symfony messenger is working

I have a pod running in Kubernetes / AWS cloud. Due to limited configuration options in a custom deployment process (not my fault!!) I cannot start the Symfony Messenger worker the way you usually would. What I have to do after a deployment is log into the shell and manually run
bin/console messenger:consume my_kafka_messages
Of course, as soon as the pod is restarted for any reason, my worker is no longer running. So until we can change the company deployment process, I have to make sure I at least get notified if the worker isn't running.
Is there any option to, e.g., run a Symfony command which checks whether the worker is running? If that were possible, I could let the system start the worker or at least send me a notification.
How about
bin/console debug:messenger
?
If I do that and get, for example, the following output, is that a sign that the worker is running? Or is it just the configuration of a worker that could run if it were started, and which may or may not be running currently?
$ bin/console deb:mess
Messenger
=========
events
------
The following messages can be dispatched:
--------------------------------------------------
#codeCoverageIgnore
App\Domain\KafkaEvents\ProductPictureCollection
handled by App\Handler\ProductPictureHandler
--------------------------------------------------
Of course I can take a crude approach and check the DB, which logs the processed datasets. But it is always possible that there is no data to process for, say, 5 days. In that case I would get false alarms although everything is fine.
So checking directly whether the worker is running would be much better, but I have no idea how to do it.
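One crude but direct check is to look for the consumer process itself (debug:messenger only lists which messages and handlers are configured; it does not tell you whether a consumer is currently running). A minimal sketch, run inside the pod (e.g. via kubectl exec); the command string simply has to match the messenger:consume invocation used above:
$ pgrep -f "messenger:consume my_kafka_messages" > /dev/null \
    && echo "worker is running" \
    || echo "worker is NOT running"
From there you could restart the worker or send yourself a notification instead of just echoing.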

Kubernetes cluster running Cronjob triggering only one pod

I am trying to find a solution for running a job that is handled by 2 pods in a cluster.
The job is run by the cronjob scheduler every (say) 15 mins. The job fetches records from a db table and processes them. Only READ permission is provided for accessing the table records. I am trying to see whether there is any way to configure in k8s that only one pod runs the job.
This way I want to prevent duplicate processing.
The alternative is to have a temporary lock file in persistent storage; the application in the pod puts a lock on it and releases it after processing.
If there is any out-of-the-box solution available within k8s, please let me know.
This can be implemented using a traditional resource-lock mechanism. A lock file is created when processing starts, and a pod does not run the job if the lock file already exists.
This way only one pod runs the job at any point in time.
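A minimal sketch of that idea in shell, using a lock directory instead of a plain file because mkdir creation is atomic (the mount path /data and the /app/process-records step are placeholders; the volume must be the same persistent storage mounted by every pod):
#!/bin/sh
# Entry point run by each pod; only the pod that creates the lock runs the job.
LOCKDIR=/data/job.lock

# mkdir fails if the directory already exists, so only one pod wins the lock
if ! mkdir "$LOCKDIR" 2>/dev/null; then
  echo "another pod holds the lock; exiting"
  exit 0
fi
trap 'rmdir "$LOCKDIR"' EXIT   # release the lock when processing ends

/app/process-records           # placeholder for the actual fetch-and-process step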

spring-batch job monitoring and restart

I am new to spring-batch and have a few questions:
I have a question about restarts. As per the documentation, the restart feature is enabled by default. What I am not clear about is whether I need to write any extra code for a restart. If so, I am thinking of adding a scheduled job that looks for failed executions and restarts them.
I understand spring-batch-admin is deprecated. However, we cannot use spring-cloud-data-flow right now. Is there any other alternative to monitor and restart jobs on demand?
The restart feature that you mention only determines whether a job is restartable or not. It doesn't mean Spring Batch will restart the failed job for you automatically.
Instead, it provides the following building blocks for developers to achieve this on their own:
JobExplorer to find out the id of the job execution that you want to restart
JobOperator to restart a job execution given a job execution id
Also, a restartable job can only be restarted if its status is FAILED. So if you want to restart a running job that stopped because of a server breakdown, you first have to find that running job and update its job execution status, and the status of all of its step executions, to FAILED in order to restart it. (See this for more information.) One solution is to implement a SmartLifecycle which uses the above building blocks to achieve this goal.
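As a rough illustration of that last point, a minimal sketch (assuming the default Spring Batch metadata table names and a MySQL-style client; the execution id 123 is only an example) of marking a stuck execution as FAILED so it becomes restartable:
$ mysql -u batch -p batch_db
-- mark the stuck step executions, then the job execution itself, as FAILED
UPDATE BATCH_STEP_EXECUTION SET STATUS = 'FAILED', END_TIME = NOW()
 WHERE JOB_EXECUTION_ID = 123 AND STATUS = 'STARTED';
UPDATE BATCH_JOB_EXECUTION SET STATUS = 'FAILED', END_TIME = NOW()
 WHERE JOB_EXECUTION_ID = 123;
A SmartLifecycle bean (or a scheduled job, as you suggested) can do the equivalent programmatically with the building blocks above and then trigger the restart through JobOperator.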

I'm having issues with DevOps production deployment - Unable to edit or replace deployment

Up until yesterday morning I was able to deploy data factory v2 changes in my release pipeline. Then last night during deployment I received an error that the connection was forced closed. Now when I try to deploy to the production environment, I get this error: "Unable to edit or replace deployment 'ArmTemplate_18': previous deployment from '12/10/2019 10:19:27 PM' is still active (expiration time is '12/17/2019 10:19:23 PM')". Am I supposed to wait a week for this error to clear itself?
This message indicates that there’s another deployment going on, with the same name, in the same ARM Resource Group. In order to perform your new deployment, you’ll need to either:
Wait for the existing deployment to complete
Stop the in-progress / active deployment
You can stop an active deployment by using the Stop-AzureRmResourceGroupDeployment PowerShell command or the azure group deployment stop command in the xPlat CLI tool. Please refer to this case.
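For example, a minimal sketch with the PowerShell cmdlet named above (the resource group name is a placeholder; passing only the resource group should cancel the deployment currently in progress there, but check Get-Help Stop-AzureRmResourceGroupDeployment for your module version):
Stop-AzureRmResourceGroupDeployment -ResourceGroupName "my-resource-group"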
Or you can open the target Resource Group in the Azure portal, go to the Deployments tab, find the uncompleted deployment, cancel it, and start a new deployment. You can refer to this issue for details.
In addition, there was a recent availability degradation event in Azure DevOps, which could also have had an impact. The engineers have now mitigated this event.

Job/Task in Kubernetes and Spring Cloud Task

I created a Pod that has @EnableTaskLauncher with spring-cloud-deployer-kubernetes. It is receiving task requests through spring-cloud-stream and launching the tasks.
Everything is working perfectly except that I want the task to be launched as Kind: Job instead of Kind: Deployment.
I could not find any configuration or property in spring-cloud-deployer-kubernetes that does this, or whether it is available at all.
We moved away from Jobs to a bare-pods model for Spring Cloud Task (in SCDF) to better control the task's lifecycle, such as the clean shutdown of the container when the SCT operation is complete.
However, there is spring-cloud/spring-cloud-deployer-kubernetes#163, which adds an option to choose between Jobs and Pods for Tasks. Please try it out and give us feedback on the PR.