weblogic.management.ManagementException while deploying

Our WebLogic 12c deployment from Jenkins randomly fails with the exception below.
weblogic.management.ManagementException: There is the same running
task 11 for Application#1.5.1 at
weblogic.utils.StackTraceDisabled.unknownMethod()
No other deployment task is running in parallel, and the WebLogic admin console is not holding any locks at that time either.


How to set up highly available active-passive PostgreSQL with a scheduled job running on the server

I am new to PostgreSQL. I went through the Postgres documentation, which covers the active-passive server setup.
What I want is for the Spring Boot scheduler to run only on the server that is pointing at the active node; on the passive server the scheduler should not run.
As soon as the active server goes down and the passive one takes over as the new active, the jobs should start running on the new server.
I have just two jobs: one runs every 5 minutes and one runs every day at 1 AM.
Any help in achieving this would be appreciated.
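One way to achieve this, assuming each Spring Boot instance's DataSource points at its local PostgreSQL node: guard every scheduled method with a check of pg_is_in_recovery(), which PostgreSQL reports as true on a standby and false on the primary. Below is a minimal sketch (the class name and job bodies are illustrative; it assumes @EnableScheduling is declared on a configuration class and spring-jdbc is on the classpath):

    import org.springframework.jdbc.core.JdbcTemplate;
    import org.springframework.scheduling.annotation.Scheduled;
    import org.springframework.stereotype.Component;

    @Component
    public class GuardedScheduledJobs {

        private final JdbcTemplate jdbc;

        public GuardedScheduledJobs(JdbcTemplate jdbc) {
            this.jdbc = jdbc;
        }

        // True only when the node this instance is connected to is the primary (active).
        private boolean isActive() {
            Boolean inRecovery = jdbc.queryForObject("SELECT pg_is_in_recovery()", Boolean.class);
            return Boolean.FALSE.equals(inRecovery);
        }

        @Scheduled(fixedRate = 5 * 60 * 1000) // every 5 minutes
        public void fiveMinuteJob() {
            if (!isActive()) return; // connected to a standby: do nothing
            // ... job logic (illustrative) ...
        }

        @Scheduled(cron = "0 0 1 * * *") // every day at 1 AM
        public void dailyJob() {
            if (!isActive()) return;
            // ... job logic (illustrative) ...
        }
    }

With this guard, failover needs no extra coordination: as soon as the standby is promoted, pg_is_in_recovery() starts returning false there and the jobs run on the new active server. Note that if both application servers instead connect to the same primary through a failover address, the check returns false on both, and a database-backed lock library such as ShedLock is the usual way to ensure each job runs on only one instance.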

Cannot run ASP.NET Core Web API on Azure DevOps deployment group (self-hosted)

I'm working on a simple deployment pipeline with Azure DevOps. I created a deployment pipeline running on a self-hosted Ubuntu deployment group.
The pipeline looks like this:
1. Download artifacts from the CI pipeline (created with dotnet publish)
2. Stop the running deployment
3. Unzip the ASP.NET Core Web API to the deployment directory
4. Run the new deployment with dotnet MyApp.dll
The first two steps work as expected. However, when the dotnet MyApp.dll command is run, the process runs for 10 seconds, with the following "error" message printed at the end:
The STDIO streams did not close within 10 seconds of the exit event from process '/usr/bin/bash'. This may indicate a child process inherited the STDIO streams and has not yet exited.
The deployment task succeeds despite the message, and the app is not running. I tried to work around this by running the command with nohup ... & and redirecting its output. After some research I found that all processes started by a pipeline agent are stopped after the agent's work is done - meaning this behaviour is intended and my understanding of Azure deployments/agents was wrong.
How do I deploy and run my app in an automated way on my own ubuntu machine using azure devops pipelines?
You are already on the right track.
All the processes launched in the pipeline are finished/cleaned up in the "Finalize Job" step when the pipeline is over.
If you don't want the processes to be killed, try setting the variable Process.clean = false, which stops the "Finalize Job" step from killing all processes.
But the next time you run the pipeline, you will need to stop the running app yourself before starting it again.

Build succeeded but Agent is still running

We configured the self-hosted Agent to skip SSL certificate validation, as we were facing an issue.
Even though the build succeeded, the Agent is still running (the logs show the build succeeded).
If I check the logs on the Agent machine, they show "try to append 1 batches web console line for record '96defxxxxxxxxxxxx', success rate: 1/1".
Your self-hosted build agent is always running; otherwise you would not be able to contact it from Azure DevOps to trigger builds.
Unless you see any errors or warnings in Azure DevOps you are fine.

Airflow scheduler fails to start tasks

My problem:
Airflow scheduler is not assigning tasks.
Background:
I have Airflow running successfully on my local machine with a SQLite database. The sample DAGs as well as my custom DAGs ran without any issues.
When I try to migrate from the SQLite database to Postgres (using this guide), the scheduler no longer seems to be assigning tasks. The DAGs get stuck in the "running" state, but no task in any DAG ever gets assigned a state.
Troubleshooting steps I've taken
The web server and the scheduler are running
The DAG is set to "ON".
After running airflow initdb, the public schema is populated with all of the airflow tables.
The user in my connection string owns the database as well as every table in the public schema.
Scheduler Log
The scheduler log keeps posting this WARNING, but I have not been able to use it to find any useful information, aside from this other post, which has no responses.
[2020-04-08 09:39:17,907] {dag_processing.py:556} INFO - Launched DagFileProcessorManager with pid: 44144
[2020-04-08 09:39:17,916] {settings.py:54} INFO - Configured default timezone <Timezone [UTC]>
[2020-04-08 09:39:17,927] {settings.py:253} INFO - settings.configure_orm(): Using pool settings. pool_size=5, max_overflow=10, pool_recycle=1800, pid=44144
[2020-04-08 09:39:19,914] {dag_processing.py:663} WARNING - DagFileProcessorManager (PID=44144) exited with exit code -11 - re-launching
Environment
PostgreSQL version 12.1
Airflow v1.10.9
This is all running on a MacBook Pro (Catalina) in a conda virtual environment.
Postgres was installed using Postgres.app. Updating Postgres.app to version 2.3.3e solved the issue; PostgreSQL itself is still version 12.1. (Exit code -11 means the DagFileProcessorManager was killed by signal 11, a segmentation fault, which suggests an environment problem rather than an Airflow configuration problem.)

'Fail on Standard Error' option ignored during deployment

I've recently had an issue using Azure DevOps pipelines when deploying a release.
I'm using a set of tasks to take a VM snapshot, install the application, and start services on multiple WebSphere application servers in parallel, then delete the VM snapshots if the release is successful.
The 'delete snapshot' step is marked to run 'only when all previous jobs have succeeded', and the application install step has 'Fail on Standard Error' checked in the task's Advanced options.
The release deployed successfully to one server and failed on another because the service didn't start. Both checks were then ignored and the snapshots were deleted.
How can I get the Pipeline to fail when any one node has failed, instead of all of them?