MariaDB messages when stopping/starting any service - sockets

We have:
Distributor ID: Debian
Description: Debian GNU/Linux 10 (buster)
Release: 10
Codename: buster
x86_64
Linux 4.19.0-17-amd64 #1 SMP Debian 4.19.194-3 (2021-07-18) x86_64 GNU/Linux
mysql Ver 15.1 Distrib 10.6.4-MariaDB, for debian-linux-gnu (x86_64) using readline 5.2
When starting/stopping mariadb.service, the following messages appear:
Failed to get properties: Unit name mariadb-extra#.socket is neither a valid invocation ID nor unit name.
Failed to get properties: Unit name mariadb#.socket is neither a valid invocation ID nor unit name.
The same messages pop up when starting/stopping any service.
What's wrong with mariadb user sockets? How can I remove these messages?

According to MDEV-27715, the Debian/Ubuntu service implementation is deficient in that it does not fully understand systemd's templated socket files.
Use systemctl instead.
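A minimal sketch of the workaround, assuming the messages come from the SysV-compat service wrapper rather than from systemd itself:
# systemctl understands templated socket units; the service(8) wrapper does not:
sudo systemctl restart mariadb.service
# Inspect the templated socket units shipped by the MariaDB packaging:
systemctl list-units --all 'mariadb*.socket'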

Related

RedHat 8.4 Kickstart Error - iscsid: Warning: Initiatorname file

I am testing a known working Red Hat Kickstart procedure on upgraded VMware software.
Our KS process uses two CD-ROM drives: the RHEL ISO is attached to CD 1 and the KS ISO to CD 2.
The KS process fails with many messages such as:
localhost dracut-initqueue[1118]: Warning: dracut-initqueue timeout - starting timeout scripts
In addition, the following message appears in the rdsosreport file:

CUDA_ERROR_COMPAT_NOT_SUPPORTED_ON_DEVICE when running guppy basecaller

I have tried to run the ONT basecaller guppy. I have run this code several times before without any issues. Now (following a reboot) it is producing the error message:
[guppy/error] main: CUDA error at /builds/ofan/ont_core_cpp/ont_core/common/cuda_common.cpp:203: CUDA_ERROR_COMPAT_NOT_SUPPORTED_ON_DEVICE [guppy/warning] main: An error occurred in the basecaller. Aborting.
Is this a compatibility problem, and if so what can I do to solve it?
I'm using Ubuntu 18.04.4 LTS (GNU/Linux 5.4.0-72-generic x86_64)
and Guppy Basecalling Software, (C) Oxford Nanopore Technologies, Limited. Version 4.0.14+8d3226e, client-server API version 2.1.0
Here is my guppy code:
guppy_basecaller -i fast5/pass -r --device cuda:0 -s hac_fastqs_demul -c /opt/ont/ont-guppy/data/dna_r9.4.1_450bps_hac.cfg --num_callers 4 --require_barcodes_both_ends --trim_barcodes --detect_mid_strand_barcodes --barcode_kits "EXP-PBC001"
This issue was fixed by rebooting.
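For context, a reboot often helps here because the loaded NVIDIA kernel module and the user-space CUDA libraries can fall out of sync after a driver update, which is one common source of CUDA_ERROR_COMPAT_NOT_SUPPORTED_ON_DEVICE. A quick sanity check, assuming the standard NVIDIA tooling is installed:
# The driver and CUDA versions reported here must be compatible:
nvidia-smi
# Version of the kernel module actually loaded:
cat /proc/driver/nvidia/version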

What version of openssl does .Q.hg require?

I am trying to use .Q.hg in kdb+ and I get the following error:
q).Q.hg`$":https://www.google.com"
'conn. OS reports: Protocol not available
[0] .Q.hg`$":https://www.google.com"
^
q))
I have downloaded different versions of openssl from the openssl website and built them from source, but nothing seems to work.
I have also downloaded the certificate as instructed on the kx website and defined the SSL_CA_CERT_FILE variable.
UPDATE:
output from (-26!)[]:
q))(-26!)[]
'Could not initialize openssl. Error was incompatible ssl version
[4] (-26!)[]
^
q))
Output from .z.K:
q)).z.K
3.6
q))
Distro version:
Icon name: computer-laptop
Chassis: laptop
Operating System: Linux Mint 19.1
Kernel: Linux 4.15.0-20-generic
Architecture: x86-64
As per the kx docs, OpenSSL 1.1 is not supported; you need to use a 1.0.x version.
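A minimal sketch for checking and working around this, assuming q 3.6 on a distro that ships OpenSSL 1.1 (the install path below is illustrative):
# Check what the system provides; Mint 19.1 ships OpenSSL 1.1.x:
openssl version
# If only 1.1 is present, point the loader at a 1.0.x build before starting q:
export LD_LIBRARY_PATH=/opt/openssl-1.0.2/lib:$LD_LIBRARY_PATH
q
Then re-run (-26!)[] inside q; it should report the TLS settings instead of failing with 'Could not initialize openssl.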

When starting Docker I get this message: "getting the final child's pid from pipe caused "read init-p: connection reset by peer"

I have Docker installed under CentOS Linux 7.6.1810 and Plesk Onyx 17.8.11, and everything was fine. Since a few hours ago, I can't start MongoDB or Docker anymore.
I get this error message
{"message":"OCI runtime create failed: container_linux.go:344: starting container process caused \"process_linux.go:297: getting the final child's pid from pipe caused \\\"read init-p: connection reset by peer\\\"\": unknown"}
What could it be?
I have fixed it: I downgraded containerd.io to version 1.2.0 and Docker is running again.
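A sketch of that downgrade, assuming yum on CentOS 7 (the exact release string varies by repository):
# Show the available containerd.io builds:
yum --showduplicates list containerd.io
# Downgrade to a 1.2.0 build and restart Docker:
sudo yum downgrade -y containerd.io-1.2.0*
sudo systemctl restart docker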
Docker-ce 18.09.2 + Linux kernel 3.10.0 produces the same problem for me. If you want to use Docker-ce 18.09.2, Linux kernel 4.x+ is required.

Fabric takes a long time with ssh

I am running fabric to automate deployment. It is painfully slow.
My local environment:
(somenv)bob#sh ~/code/somenv/somenv/fabfile $ > uname -a
Darwin sh.local 12.4.0 Darwin Kernel Version 12.4.0: Wed May 1 17:57:12 PDT 2013; root:xnu-2050.24.15~1/RELEASE_X86_64 x86_64
My fab file:
#!/usr/bin/env python
import logging
import paramiko as ssh
from fabric.api import env, run

env.hosts = ['examplesite']
env.use_ssh_config = True
#env.forward_agent = True

# Log paramiko's activity to see where the time goes:
logging.basicConfig(level=logging.INFO)
ssh.util.log_to_file('/tmp/paramiko.log')

def uptime():
    run('uptime')
Here is the portion of the debug logs:
(somenv)bob#sh ~/code/somenv/somenv/fabfile $ > date;fab -f /Users/bob/code/somenv/somenv/fabfile/pefabfile.py uptime
Sun Aug 11 22:25:03 EDT 2013
[examplesite] Executing task 'uptime'
[examplesite] run: uptime
DEB [20130811-22:25:23.610] thr=1 paramiko.transport: starting thread (client mode): 0x13e4650L
INF [20130811-22:25:23.630] thr=1 paramiko.transport: Connected (version 2.0, client OpenSSH_5.9p1)
DEB [20130811-22:25:23.641] thr=1 paramiko.transport: kex algos:['ecdh-sha2-nistp256', 'ecdh-sha2-nistp384', 'ecdh-sha2-nistp521', 'diffie-hellman-grou
It takes 20 seconds before paramiko even starts the thread. Surely, executing the task 'uptime' does not take that long. I can manually log in through ssh, type uptime, and exit in 5-6 seconds. I'd appreciate any help on how to extract more debug information. I made the changes mentioned here, but it made no difference.
Try:
env.disable_known_hosts = True
See:
https://github.com/paramiko/paramiko/pull/192
&
Slow public key authentication with paramiko
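To confirm that known-hosts handling is the bottleneck before changing the fabfile, a quick test outside Fabric (host name taken from the fabfile above):
# If this is fast while a plain 'ssh examplesite' is slow,
# the known_hosts lookup is the likely culprit:
time ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no examplesite true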
Maybe it is a problem with DNS resolution and/or IPv6.
A few things you can try (a quick timing check follows the list):
replace the server name with its IP address in env.hosts
disable IPv6
use another DNS server (e.g. OpenDNS)
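A minimal way to test the DNS and IPv6 hypotheses, again using the host from the fabfile:
# Compare connection setup time by name vs. forced IPv4:
time ssh examplesite true
time ssh -4 examplesite true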
For anyone looking at this post-2014: paramiko, which was the slow component when checking known hosts, introduced a fix in March 2014 (v1.13); the fix was allowed as a requirement by Fabric in v1.9.0 and backported to v1.8.4 and v1.7.4.
So, upgrade!
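A sketch of the upgrade for a 1.x-era setup (the version pins are illustrative; Fabric 2+ dropped fabric.api):
# Pull in the paramiko known-hosts fix via compatible releases:
pip install --upgrade 'fabric>=1.9,<2' 'paramiko>=1.13'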