Rundeck service starts and goes to dead state after a while - haproxy

Rundeck service starts and goes to dead state after a while
Below is the output.
02:43:11 # rpm -qa | grep rundeck
rundeck-config-2.6.9-1.21.GA.noarch
rundeck-2.6.9-1.21.GA.noarch
02:43:59 # service rundeckd start
Starting rundeckd: [ OK ]
02:44:07 # service rundeckd status
rundeckd (pid 31637) is running...
02:44:48 # service rundeckd status
rundeckd dead but pid file exists
02:44:14 # java -version
openjdk version "1.8.0_262"
OpenJDK Runtime Environment (build 1.8.0_262-b10)
OpenJDK 64-Bit Server VM (build 25.262-b10, mixed mode)

Checking your original post here: it's a network problem (java.net.BindException: Address already in use). Another process is using your Rundeck TCP port, which is why the Rundeck process dies on startup. You can identify the other process with lsof -i :4440, or reconfigure Rundeck to listen on another TCP port; see the sketch below.
EDIT: Jabraj found the solution: downgrade to JDK 1.7.
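A minimal sketch of the check suggested above, assuming the default Rundeck port 4440 (use whichever of these tools is installed):
# Show the process currently bound to port 4440
lsof -i :4440
# Alternative using netstat
netstat -tulpn | grep 4440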

I faced the same problem and resolved it by changing the ownership of the Rundeck-related folders (sketched below):
1. Remove the /var/run/rundeck.pid file to get rid of the stale PID entry.
2. Check for any other leftover process using the lsof command.
3. Change the ownership of the Rundeck-related folders back (the owner should be rundeck).
4. Restart the rundeckd service.
Hurray! It's now running well inside the container.
4 steps to resolve rundeck issue
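A rough sketch of those four steps as shell commands; the directories are the usual RPM-install locations and may differ on your system:
# 1. Remove the stale pid file
rm -f /var/run/rundeck.pid
# 2. Make sure nothing else is holding the Rundeck port
lsof -i :4440
# 3. Give the Rundeck directories back to the rundeck user
chown -R rundeck:rundeck /etc/rundeck /var/lib/rundeck /var/log/rundeck /var/rundeck
# 4. Restart the service
service rundeckd restart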

Related

What is this file hs_err_pid6392.log? [duplicate]

I have lots of log file in my home directory:
hs_err_pid2326.log
hs_err_pid2416.log
I believe it is a java error log file, how to remove it and stop creating them?
Java version:
[kelvin@localhost ~]$ java -version
java version "1.6.0_21"
Java(TM) SE Runtime Environment (build 1.6.0_21-b06)
Java HotSpot(TM) Server VM (build 17.0-b16, mixed mode)
They are created if and when the JVM crashes; they're analogous to a core file, but contain a lot of Java-specific information. They're just text files, and you can delete them like you would any other files:
$ rm ~/hs_err_pid*.log
You can only stop them from being created by no longer crashing the JVM; normally such crashes are rare. Open the files in a text editor and they will tell you something about the cause of the crash.
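If you cannot fix the crashes right away, the HotSpot JVM also lets you redirect where these files are written, which at least keeps them out of your home directory. A sketch (the target path and yourapp.jar are just placeholders):
# Write crash logs to a dedicated directory instead of the working directory;
# %p is replaced with the PID of the crashing JVM.
java -XX:ErrorFile=/var/log/java/hs_err_%p.log -jar yourapp.jar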
These are Java crash (core dump) log files. The PID in the file name tells you which Java process created them, so you can trace and monitor that process.
Have a shutdown script to delete these files.
With systemd, create an executable script that deletes those files, e.g. /home/user/cleanJavaLogs.bash:
#!/bin/bash
rm -f /home/user/hs_err_pid*
Then create a service descriptor file /etc/systemd/system/clearJavaLogs.service containing:
[Unit]
Description=Clear Java "hs_err_pidnnnn.log" files on shutdown
DefaultDependencies=no
Before=shutdown.target reboot.target halt.target
[Service]
Type=oneshot
ExecStart=/home/user/cleanJavaLogs.bash
[Install]
WantedBy=halt.target reboot.target shutdown.target
and activate the service:
systemctl enable clearJavaLogs.service
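To confirm everything is wired up before the next shutdown, something along these lines should work (file names as used above):
# The script must be executable for ExecStart to run it
chmod +x /home/user/cleanJavaLogs.bash
# Pick up the new unit file and check that it is registered
systemctl daemon-reload
systemctl is-enabled clearJavaLogs.service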

Cannot complete pgadmin4 setup. Apache web server

I've got a problem completing the pgAdmin 4 installation through the sudo /usr/pgadmin4/bin/setup-web.sh command.
During this process the installer does not recognize that Apache is running and asks me whether I want to start it:
The Apache web server is not running. We can enable and start the web server for you to finish pgAdmin 4 installation. Continue (y/n)? y
Then it just spits some errors:
Too few arguments.
Error enabling . Please check the systemd logs
Too few arguments.
Error starting . Please check the systemd logs
So far I haven't found where the logs are stored.
About my Apache: I am quite sure that my server is running, because I can connect to it through the browser, phpMyAdmin is working properly, and service apache2 status returns * apache2 is running. As I understand it, apache2 is just another name for the httpd service, and there is no other service called simply apache.
PostgreSQL seems to work properly from the command line. I haven't tested whether I can connect to it yet, but that shouldn't matter here, right?
I am using
**PostgreSQL:** 12.5 (Ubuntu 12.5-0ubuntu0.20.04.1)
**Ubuntu:** Ubuntu 20.04 LTS
**Server:** Apache/2.4.41 (Ubuntu)
I had the same issue on Debian 10 and Ubuntu 20. The /usr/pgadmin4/bin/setup-web.sh script uses 'uname -a', whose output doesn't contain the "Debian" identifier. Updating it to read /proc/version instead allows APACHE to be set to the Debian variant, apache2.
Change:
UNAME=$(uname -a)
To:
UNAME=$(cat /proc/version)
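One way to apply that edit without opening an editor, assuming the assignment in the script looks exactly like the line above:
# Back up the installer script, then swap the UNAME assignment
sudo cp /usr/pgadmin4/bin/setup-web.sh /usr/pgadmin4/bin/setup-web.sh.bak
sudo sed -i 's|UNAME=$(uname -a)|UNAME=$(cat /proc/version)|' /usr/pgadmin4/bin/setup-web.sh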
I had a similar problem with Ubuntu running inside WSL 2. I managed to resolve it by modifying the /usr/pgadmin4/bin/setup-web.sh script; I moved these lines outside of the conditional:
IS_DEBIAN=1
APACHE=apache2
This allowed the installation to progress beyond the "Too few arguments." error. There was still an error, however:
System has not been booted with systemd as init system (PID 1). Can't operate.
Error restarting apache2. Please check the systemd logs
I resolved this by running:
sudo service apache2 restart
After this I tried bringing up the admin page by visiting http://127.0.0.1/pgadmin4 from the Windows host. This still didn't work, and I had to connect using the Ubuntu machine's IP address (you can find it via ifconfig), which finally allowed me to see the login page.
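For reference, a quick way to find that address from inside the WSL/Ubuntu machine (eth0 is the usual WSL 2 interface name, but it may differ):
# Print the addresses assigned to the WSL network interface
ip addr show eth0
# Or simply list all addresses of the machine
hostname -I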

Docker Machine error: Hyper-V PowerShell Module is not available

I've checked my Hyper-V settings and the PowerShell module is enabled. I've also found this documented issue: https://github.com/docker/machine/issues/4342, but it is not the same issue, since I do not have VMware PowerCLI installed. That issue was closed with a push to the repo and is supposedly fixed in 0.14.0-rc1, build e918c74, so I tried it anyway. After replacing my docker-machine.exe I'm still getting the error, and I still get it even if I reinstall Docker for Windows.
For some more background, this error started happening after a reinstall, because my Docker install had an error: https://github.com/docker/for-win/issues/1691; however, I'm no longer getting that issue after reinstalling.
For those who struggle with this issue on Windows, follow the instructions here.
When creating a Hyper-V VM using docker-machine on Windows 10, an error was returned: Error with pre-create check: "Hyper-V PowerShell Module is not available".
The solution is very simple: the cause is the version of the docker-machine program. Replace it with v0.13.0. The detailed steps are as follows:
Download the 0.13.0 version of the docker-machine command. Click to download: 32-bit system or 64-bit system.
After the download is complete, rename the downloaded file and replace the docker-machine.exe file in the C:\Program Files\Docker\Docker\resources\bin directory. It is best to back up the original file first.
Here is the solution
https://github.com/docker/machine/releases/download/v0.15.0/docker-machine-Windows-x86_64.exe
Save the downloaded file to your existing directory containing docker-machine.exe.
For my system this is the location for docker-machine.exe
/c/Program Files/Docker/Docker/Resources/bin/docker-machine.exe
Back up the old file and replace it with the new one.
cp docker-machine.exe docker-machine.014.exe
Rename the downloaded filename to docker-machine.exe
mv docker-machine-Windows-x86_64.exe docker-machine.exe
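After the swap you can confirm that the binary in use is the new one (run from the same directory as the commands above):
# Should now report version 0.15.0
./docker-machine.exe --version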
Build Instructions
Create virtual switch in Hyper-V manager named myswitch
Request Docker to create a VM named myvm1
docker-machine create -d hyperv --hyperv-virtual-switch "myswitch" myvm1
Results
docker-machine create -d hyperv --hyperv-virtual-switch "myswitch" myvm1
Running pre-create checks...
(myvm1) Image cache directory does not exist, creating it at C:\Users\Trey Brister\.docker\machine\cache...
(myvm1) No default Boot2Docker ISO found locally, downloading the latest release...
(myvm1) Latest release for github.com/boot2docker/boot2docker is v18.05.0-ce
(myvm1) Downloading C:\Users\Trey Brister\.docker\machine\cache\boot2docker.iso from https://github.com/boot2docker/boot2docker/releases/download/v18.05.0-ce/boot2docker.iso...
(myvm1) 0%....10%....20%....30%....40%....50%....60%....70%....80%....90%....100%
Creating machine...
(myvm1) Copying C:\Users\Trey Brister\.docker\machine\cache\boot2docker.iso to C:\Users\Trey Brister\.docker\machine\machines\myvm1\boot2docker.iso...
(myvm1) Creating SSH key...
(myvm1) Creating VM...
(myvm1) Using switch "myswitch"
(myvm1) Creating VHD
(myvm1) Starting VM...
(myvm1) Waiting for host to start...
Waiting for machine to be running, this may take a few minutes...
Detecting operating system of created instance...
Waiting for SSH to be available...
Detecting the provisioner...
Provisioning with boot2docker...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...
Checking connection to Docker...
Docker is up and running!
To see how to connect your Docker Client to the Docker Engine running on this virtual machine, run: C:\Program Files\Docker\Docker\Resources\bin\docker-machine.exe env myvm1
(1) v0.15 fixed this issue officially:
"Fix issue #4424 - Pre-create check: "Hyper-V PowerShell Module is not available""
Official introduction:
https://github.com/docker/machine/pull/4426
Address to download v0.15:
https://github.com/docker/machine/releases
(2) I tested this and it works fine. There is no need to restart Docker; it takes effect immediately after docker-machine.exe is replaced with version 0.15.
(3) Backing up the original file first is a good habit.
Just start Docker Desktop if you are on Windows.

Rundeck Status shows "dead" despite rundeck being up and running

The Rundeck instance is up and running, but when I execute the following command it shows:
$/etc/init.d/rundeckd status
Status rundeckd: rundeckd is running (pid=37296, port=4440) dead
Kindly help me out here on how to get this glitch fixed?
Thanks!
You can run /etc/init.d/rundeckd stop and it should clear out the old pid file.
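A sketch of that clean-up; the pid file path is the usual one for an RPM install, so adjust it if yours differs:
# Stop the service; this should clear out the old pid file
/etc/init.d/rundeckd stop
# If the stale pid file is still there, remove it by hand
rm -f /var/run/rundeck.pid
/etc/init.d/rundeckd start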
You can check the Java version (down to the exact subversion) referenced in /app/rundeck/etc/Profile against the Java installed on the server where Rundeck runs. Both should be identical; if not, update the Java version in the file /app/rundeck/etc/Profile to match the one on the server, then restart Rundeck. This should fix the issue.
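A quick way to compare the two, assuming the Profile path mentioned above:
# Java actually installed on the server
java -version
# Java referenced by the Rundeck profile
grep -i java /app/rundeck/etc/Profile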

What is veewee waiting for when it's waiting for ssh login?

When veewee is displaying the following message, "Waiting for ssh login on 127.0.0.1 with user veewee to sshd on port => 7222 to work, timeout=10000 sec", what exactly is it waiting on?
As far as I can tell there is an SSH server on port 7222 on the host that veewee has put up, and it's waiting on that. This means that something in the guest is going to connect back to it. However, I can't figure out what that thing might be, and thus I can't debug further.
Further details
I'm trying to build a virtualbox image for vagrant with the CentOS-6.3-x86_64-minimal template. My steps:
bundle exec veewee vbox define 'ejs-centos6.3-1' 'CentOS-6.3-x86_64-minimal'
wget http://mirror.symnds.com/distributions/CentOS-vault/6.3/isos/x86_64/CentOS-6.3-x86_64-minimal.iso
bundle exec veewee vbox build 'ejs-centos6.3-1'
The CentOS install appeared to run without error but it's stuck waiting for the ssh login.
You're right that port 7222 is involved, but the SSH server itself listens on the guest (VM), not the host: VirtualBox forwards that host port to the guest's sshd.
The host (Veewee) is waiting to connect to it. This SSH service is supposed to become available when the VM install process finishes; it's one of the checks Veewee uses to decide that the setup went fine and that the VM is ready.
If Veewee blocks and never gets this SSH connection, there could be multiple reasons:
The VM setup went wrong and something prevents it from finishing successfully. Check the Veewee output and the VirtualBox VM graphical console that should have opened when launching veewee vbox build.
Something at the network level is preventing your host computer from connecting to the VM.
The VM image doesn't have sshd installed, and/or the veewee box configuration files (in veewee/definitions/ejs-centos6.3-1/) are missing the instructions to install the ssh package.
You should try to log in to the VM using the VirtualBox console window and check whether the ssh package is installed (rpm -qa | grep openssh-server) and whether a process named sshd is running; a short sketch follows.
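A minimal set of checks to run from the guest console, assuming a CentOS 6 guest like the one above:
# Is the SSH server package installed?
rpm -qa | grep openssh-server
# Is the daemon actually running?
ps aux | grep '[s]shd'
# On CentOS 6 the init script can also report status
service sshd status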
I've run Veewee against CentOS 7 with the GUI on, and it got stuck at Anaconda asking for the package source. I checked the ks.cfg and it was pointing to a dead resource (404). After pointing it to a valid URL, it went through.