When starting Docker I get this message: "getting the final child's pid from pipe caused "read init-p: connection reset by peer"" - mongodb

I have Docker installed under CentOS Linux 7.6.1810 and Plesk Onyx 17.8.11, and everything was fine. For the past few hours I haven't been able to start MongoDB or Docker anymore.
I get this error message:
{"message":"OCI runtime create failed: container_linux.go:344: starting container process caused \"process_linux.go:297: getting the final child's pid from pipe caused \\\"read init-p: connection reset by peer\\\"\": unknown"}
What could it be?

I have fixed it: I downgraded containerd.io to version 1.2.0 and Docker is running again.
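On CentOS with the Docker CE repository configured, the downgrade can be done with yum; the exact version string below is an assumption, so check what your repository actually offers first:
yum list --showduplicates containerd.io   # see available versions
sudo yum downgrade containerd.io-1.2.0    # version string assumed
sudo systemctl restart docker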

Docker CE 18.09.2 with Linux kernel 3.10.0 produces the same problem. If you want to use Docker CE 18.09.2, Linux kernel 4.x or newer is required.
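You can check the kernel and the installed package versions before deciding what to downgrade or upgrade (standard commands, nothing specific to this bug):
uname -r                          # kernel version
docker version                    # Docker client/daemon versions
rpm -q docker-ce containerd.io    # installed packages (on RPM-based systems)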

Related

CUDA_ERROR_COMPAT_NOT_SUPPORTED_ON_DEVICE when running guppy basecaller

I have tried to run the ONT basecaller guppy. I have run this code several times before without any issues. Now (following a reboot) it is producing the error message:
[guppy/error] main: CUDA error at /builds/ofan/ont_core_cpp/ont_core/common/cuda_common.cpp:203: CUDA_ERROR_COMPAT_NOT_SUPPORTED_ON_DEVICE [guppy/warning] main: An error occurred in the basecaller. Aborting.
Is this a compatibility problem, and if so what can I do to solve it?
I'm using Ubuntu 18.04.4 LTS (GNU/Linux 5.4.0-72-generic x86_64)
and Guppy Basecalling Software, (C) Oxford Nanopore Technologies, Limited. Version 4.0.14+8d3226e, client-server API version 2.1.0
Here is my guppy command:
guppy_basecaller -i fast5/pass -r --device cuda:0 \
    -s hac_fastqs_demul \
    -c /opt/ont/ont-guppy/data/dna_r9.4.1_450bps_hac.cfg \
    --num_callers 4 \
    --require_barcodes_both_ends --trim_barcodes --detect_mid_strand_barcodes \
    --barcode_kits "EXP-PBC001"
This issue was fixed by rebooting again.
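If the error returns, a quick sanity check (general NVIDIA tooling, not from the original answer) is to confirm the driver came up cleanly after the reboot:
nvidia-smi   # should list the GPU plus driver/CUDA versions without errors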

How to debug not being able to translate an OID with a new MIB file (UPS-MIB)?

On CentOS, I ran into the following error:
sudo snmptrap -v 2c -c read localhost '' UPS-MIB::upsTraps
MIB search path: /root/.snmp/mibs:/usr/share/snmp/mibs
Cannot find module (UPS-MIB): At line 0 in (none)
UPS-MIB::upsTraps: Unknown Object Identifier
The error happened after I copied UPS-MIB.txt to /usr/share/snmp/mibs and started snmptrapd:
snmptrapd -f -Lo -Dread-config -m ALL
The Net-SNMP version is 5.2.x.
The same procedure works fine on Ubuntu 18.04 with Net-SNMP 5.3.7.
How can I debug and fix this problem?
Besides the Net-SNMP version difference: on Ubuntu I found instructions to install mib-download-tool after installing Net-SNMP, and to comment out the lines beginning with mibs: in snmp.conf, in order to fix errors about missing MIBs.
For CentOS, however, I found no such instruction and there is no error message about missing MIBs, so I have not done this.
The MIB file was downloaded from https://tools.ietf.org/rfc/rfc1628.txt and renamed to UPS-MIB.txt. (It seems to me that the name of the MIB file does not matter as long as it is unique; I tried different names such as upsMIB.txt and rfc1628.txt, but that did not help.)
I solved the problem as follows:
I manually copied /usr/share/snmp/mibs/ietf/UPS-MIB from an Ubuntu machine with Net-SNMP 5.7.3 installed to /usr/share/snmp/mibs/UPS-MIB on the CentOS machine, then restarted snmpd with:
service snmpd restart
After that, the OIDs of UPS-MIB became visible and accessible.
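To check that a MIB module parses and that an OID resolves, snmptranslate is useful (general Net-SNMP usage, not part of the original post); the parse-mibs debug token reports details about modules that fail to load:
snmptranslate -m UPS-MIB -On UPS-MIB::upsTraps           # should print the numeric OID
snmptranslate -Dparse-mibs -m UPS-MIB UPS-MIB::upsTraps  # verbose parser output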
Maybe the version I downloaded from https://tools.ietf.org/rfc/rfc1628.txt is not suitable? That would be plausible: the raw RFC text wraps the MIB module in page headers, footers, and surrounding prose, which the MIB parser may not cope with, whereas the file shipped with Net-SNMP is the extracted MIB module itself.

PDOException with message 'could not find driver'

Fatal error: Uncaught exception 'PDOException' with message 'could not find driver' in /home/kholifah/htdocs/cechcalk.ck/userAuth.php:22 Stack trace: #0 /home/kholifah/htdocs/cechcalk.ck/userAuth.php(22): PDO->__construct('?????pgsql:dbna...') #1 {main} thrown in /home/kholifah/htdocs/cechcalk.ck/userAuth.php on line 22
It looks like you are missing the module called pdo_pgsql.
Look in your php.ini for a line extension=php_pdo_pgsql.dll (on Windows) or extension=pdo_pgsql (on Linux). It should be uncommented.
On Ubuntu or another Linux distribution you can install the PDO driver with something like:
apt-get install php5-pgsql
I was also facing this problem with XAMPP and this worked for me; it may also be helpful on a Linux LAMP stack or with a newer PHP version (5.6 - 7).
Just uncomment these lines in the php.ini file:
extension=pdo_pgsql
extension=pgsql
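After editing php.ini and restarting, a quick way to confirm the drivers are actually loaded is the PHP CLI (standard tooling, not from the original answers); note that the CLI may read a different php.ini than the web server:
php -m | grep -i pgsql   # should list both pgsql and pdo_pgsql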
The earlier answers forgot to mention that you need to restart some services after enabling the following in the php.ini file:
extension=pdo_pgsql
extension=pgsql
You need to restart your web server; in my case I'm using nginx, so I do:
sudo systemctl restart nginx.service
After that I restart php-fpm using:
sudo systemctl restart php-fpm.service
That's all... cheers

Context broker is not started: "su: user orion does not exist"

I'm trying to deploy contextBroker using the command /etc/init.d/contextBroker and I get the following error:
Starting...
contextBroker is stopped
Starting contextBroker... su: user orion does not exist
cat: /var/log/contextBroker/contextBroker.pid: No such file or directory
pidfile not found [FAILED]
Using the following command I can start contextBroker:
/usr/bin/contextBroker -port 10026 -logDir /var/log/contextBroker
-pidpath /var/log/contextBroker/contextBroker.pid -dbhost localhost -db orion
What could be the cause of the problem?
There was a bug in the Orion RPM, fixed in 0.16.0, that caused the removal of the "orion" user when updating the RPM package. The "orion" user is the one used by default by the /etc/init.d/contextBroker script, hence the error message su: user orion does not exist.
Note that although the bug has been fixed in 0.16.0, updating from 0.15.0 (for instance) to 0.16.0 will still be problematic, as the version being updated from (0.15.0) is the "buggy" one. Updating from 0.16.0 to any newer version (e.g. the upcoming 0.17.0) should work without problems.
Fortunately, the problem has an easy solution: instead of updating the package, remove it and install it again, typically with:
yum remove contextBroker
yum install contextBroker
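Alternatively, if removing and reinstalling is not convenient, recreating the service account by hand should let the init script start the broker again. This is a sketch based on the assumption that the script only needs the account to exist; the account settings the RPM actually uses may differ:
useradd -r -s /sbin/nologin orion        # system account, no login shell (settings assumed)
chown -R orion /var/log/contextBroker    # let the script write its pid and log files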

python-memcache / memcached -- I installed them on a CentOS VirtualBox VM but get/set never seem to work

I'm using Python. I did a yum install memcached followed by an easy_install python-memcached.
I used the simple test program from the Help(memcache). When I wasn't getting the proper answers I threw in some print statements:
[~/test]$ cat m2.py
import memcache
mc = memcache.Client(['127.0.0.1:11211'], debug=0)
x = mc.set("some_key", "Some value")
print 'Just set a key and value into the cache (supposedly)'
value = mc.get("some_key")
print 'Just retrieved that value from the cache using the key'
print 'X %s' % x
print 'Value %s' % value
[~/test]$ python m2.py
Just set a key and value into the cache (supposedly)
Just retrieved that value from the cache using the key
X 0
Value None
[~/test]$
The question now is: what have I failed to do in my installation? It appears to be working from an API perspective, but it fails to put anything into the shared memcache area.
I'm using a VirtualBox VM running CentOS:
[~]# cat /proc/version
Linux version 2.6.32-358.6.2.el6.i686 (mockbuild@c6b8.bsys.dev.centos.org) (gcc version 4.4.7 20120313 (Red Hat 4.4.7-3) (GCC) ) #1 SMP Thu May 16 18:12:13 UTC 2013
Is there a daemon that is supposed to be running? I don't see an obvious one when I do a ps.
I tried to get pylibmc installed on my VM but was unable to find a working installation, so for now I will see if I can get the above working first.
I discovered that if I run straight from the Python interactive console I get a bit more output when I set debug=1:
>>> mc = memcache.Client(['127.0.0.1:11211'], debug=1)
>>> mc.stats
{}
>>> mc.set('test','value')
MemCached: MemCache: inet:127.0.0.1:11211: connect: Connection refused. Marking dead.
0
>>> mc.get('test')
MemCached: MemCache: inet:127.0.0.1:11211: connect: Connection refused. Marking dead.
When I try to use telnet to connect to the port, as in the example, I get connection refused:
[root@~]# telnet 127.0.0.1 11211
Trying 127.0.0.1...
telnet: connect to address 127.0.0.1: Connection refused
[root@~]#
I tried the instructions I found on the net for configuring telnet so localhost wouldn't be disabled:
vi /etc/xinetd.d/telnet
service telnet
{
        flags           = REUSE
        socket_type     = stream
        wait            = no
        user            = root
        server          = /usr/sbin/in.telnetd
        log_on_failure  += USERID
        disable         = no
}
And then I ran the commands to restart the service(s):
service iptables stop
service xinetd stop
service iptables start
service xinetd start
service iptables stop
I ran both cases (iptables started and stopped) but it had no effect, so I am out of ideas. What do I need to do so that the port will be allowed, if that is the problem?
Or is there a memcached service that needs to be running in order to open up the port?
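(For reference, if a firewall rule had been the cause, explicitly allowing the port would have looked something like the line below. It is illustrative only; as the answer below shows, the actual problem was elsewhere.)
iptables -I INPUT -p tcp --dport 11211 -j ACCEPT   # allow inbound connections to memcached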
Well, this is what it took to get it working (a series of manual steps):
1) su -
cd /var/run
mkdir memcached # this was missing
In the memcached file I added "-l 127.0.0.1" to the OPTIONS statement; -l is the listen-address option. Do this for steps 2 and 3, as I'm not certain which file is actually used at runtime.
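For reference, after the edit the CentOS-style /etc/sysconfig/memcached file looks roughly like this; every value except the added OPTIONS line is a distro default and an assumption on my part:
PORT="11211"
USER="memcached"
MAXCONN="1024"
CACHESIZE="64"
OPTIONS="-l 127.0.0.1"   # the added listen-address option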
2) cd /etc/sysconfig
cp memcached memcached.old
vi memcached
3) cd /etc/init.d
cp memcached memcached.old
vi memcached
4) Try some commands to see if the server starts now
/etc/init.d/memcached start
/etc/init.d/memcached status
/etc/init.d/memcached stop
/etc/init.d/memcached restart
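Once the service is up, the port can be verified directly with the memcached text protocol over telnet (a general check, not one of the original steps); the stats command should return a block of STAT lines rather than connection refused:
telnet 127.0.0.1 11211
stats
quit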
5) I tried opening http://127.0.0.1:11211 in a browser, but it never seemed to display anything, so I don't know how valid that check is. In hindsight that is expected: memcached speaks its own text protocol rather than HTTP, so a browser will not render anything useful. I'm not running Apache or anything like that, so perhaps it's not relevant to my case.
6) Now it should be ready to go. If you run the test shown below, it should work; at least it did for me. Running help(memcache) displays a simple example program; just paste that in and it should work fine.
[~]$ python
>>> import memcache
>>> help(memcache)