CUDA_ERROR_COMPAT_NOT_SUPPORTED_ON_DEVICE when running guppy basecaller

I have tried to run the ONT basecaller guppy. I have run this code several times before without any issues. Now (following a reboot) it is producing the error message:
[guppy/error] main: CUDA error at /builds/ofan/ont_core_cpp/ont_core/common/cuda_common.cpp:203: CUDA_ERROR_COMPAT_NOT_SUPPORTED_ON_DEVICE
[guppy/warning] main: An error occurred in the basecaller. Aborting.
Is this a compatibility problem, and if so what can I do to solve it?
I'm using Ubuntu 18.04.4 LTS (GNU/Linux 5.4.0-72-generic x86_64)
and Guppy Basecalling Software, (C) Oxford Nanopore Technologies, Limited. Version 4.0.14+8d3226e, client-server API version 2.1.0
Here is my guppy code:
guppy_basecaller -i fast5/pass -r --device cuda:0 -s hac_fastqs_demul -c /opt/ont/ont-guppy/data/dna_r9.4.1_450bps_hac.cfg --num_callers 4 --require_barcodes_both_ends --trim_barcodes --detect_mid_strand_barcodes --barcode_kits "EXP-PBC001"

This issue was fixed by rebooting.
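Since a reboot cleared it, one plausible cause is a driver/library mismatch: the NVIDIA user-space libraries get updated (for example by unattended upgrades) while the old kernel module is still loaded. A minimal sketch for reasoning about this, assuming a GNU userland; the `version_ge` helper and the version numbers below are illustrative, not taken from guppy or the NVIDIA tools:

```shell
# Illustrative check: does the loaded driver satisfy the minimum the CUDA
# runtime needs? The helper and the example versions are hypothetical.
version_ge() { [ "$(printf '%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]; }

# Real diagnostics (machine-dependent, shown as comments):
#   nvidia-smi                        # does the driver respond at all?
#   cat /proc/driver/nvidia/version   # version of the loaded kernel module
driver=418.87.00   # example: value reported by nvidia-smi
needed=418.39      # example: minimum driver for the CUDA build guppy links
if version_ge "$driver" "$needed"; then
  echo "driver ok"
else
  echo "driver too old for this CUDA runtime"
fi
```

If the version reported by `nvidia-smi` and the one in `/proc/driver/nvidia/version` disagree, the libraries and the kernel module are out of sync, which a reboot (reloading the module) resolves.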

MariaDB messages when stopping/starting any service

We have:
Distributor ID: Debian
Description: Debian GNU/Linux 10 (buster)
Release: 10
Codename: buster
x86_64
Linux 4.19.0-17-amd64 #1 SMP Debian 4.19.194-3 (2021-07-18) x86_64 GNU/Linux
mysql Ver 15.1 Distrib 10.6.4-MariaDB, for debian-linux-gnu (x86_64) using readline 5.2
When starting/stopping mariadb.service, the following message appears:
Failed to get properties: Unit name mariadb-extra#.socket is neither a valid invocation ID nor unit name.
Failed to get properties: Unit name mariadb#.socket is neither a valid invocation ID nor unit name.
The same messages pop up when starting/stopping any service.
What's wrong with mariadb user sockets? How can I remove these messages?
Per MDEV-27715, the service implementation on Debian/Ubuntu is deficient in that it does not fully understand systemd's templated socket units.
Use systemctl instead.
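The unit names in those messages are the tell: systemd template sockets are written name@.socket, and '#' is not even a legal character in a unit name. A toy sketch of the naming rules, assuming POSIX sh; the `unit_kind` helper and its patterns are mine, for illustration only:

```shell
# Toy classifier for systemd socket-unit names (illustrative only).
# Template units are "name@.socket", instances "name@inst.socket";
# '#' is not a valid unit-name character, so the names the service
# wrapper generates ("mariadb#.socket") are rejected outright.
unit_kind() {
  case "$1" in
    *[!A-Za-z0-9:._@-]*) echo invalid ;;   # illegal character somewhere
    *@.socket)           echo template ;;
    *@*.socket)          echo instance ;;
    *.socket)            echo plain ;;
    *)                   echo other ;;
  esac
}

unit_kind "mariadb@.socket"    # prints: template
unit_kind "mariadb#.socket"    # prints: invalid
```

Driving the unit directly (systemctl restart mariadb) sidesteps the wrapper that generates the bogus names.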

lldb reports my remote aarch64 platform as x86_64 (AOSP)

I'm trying to remotely debug a process on my aarch64 hardware. Why does the triple start with x86_64? I would expect aarch64.
(lldb) platform select remote-android
Platform: remote-android
Connected: no
(lldb) platform connect connect://localhost:5039
Platform: remote-android
Triple: x86_64-unknown-linux-android
OS Version: 30 (5.4.47-07670-gd50c0c10c465)
Hostname: localhost
Connected: yes
WorkingDir: /
Kernel: #136 SMP PREEMPT Thu Nov 26 11:09:46 EST 2020
My Android hardware is aarch64. I pushed lldb-server to the target with
adb push prebuilts/clang/host/linux-x86/clang-r383902b/runtimes_ndk_cxx/aarch64/lldb-server /data/local/tmp/lldb-server
Ran it with:
adb shell /data/local/tmp/lldb-server platform --listen "*:5039" --server
And connected with lldb (from prebuilts/clang/host/linux-x86/clang-r383902b/bin/lldb)
I am able to attach to processes on the target, and even list processes (which again show x86_64 triples, to my confusion), but I can't add any symbol files or run target create. Those commands yield architecture errors, which is what led me back to the triple in the platform command. (Side note: I do build with -Wl,--build-id=sha1 -g -glldb.)
In the tutorials I see online, the triples report arm.
Notes:
Everything is done from the shell (no IDE),
Not running in Docker,
My hardware is rooted and even in permissive mode right now
Everything here is based on AOSP 11
clang tag: clang-r383902b
The cause was the clang version I was using, clang-r383902b. After setting my local manifest to pull from master and using clang-r399163b, everything worked.
Specifically:
Extend the clang prebuilts project in a local manifest:
<?xml version="1.0" encoding="UTF-8"?>
<manifest>
<remote name="aosp" fetch="https://android.googlesource.com/" review="https://android-review.googlesource.com/" />
<extend-project path="prebuilts/clang/host/linux-x86" name="platform/prebuilts/clang/host/linux-x86" groups="trusty" revision="refs/heads/master" clone-depth="1" remote="aosp" />
</manifest>
Sync: repo sync -d prebuilts/clang/host/linux-x86
Re-push: CLANG_TAG=clang-r399163b adb push prebuilts/clang/host/linux-x86/${CLANG_TAG}/runtimes_ndk_cxx/aarch64/lldb-server /data/local/tmp/lldb-server (or whatever the newest tag is)
Re-run lldb-server
I experimented with using the newest lldb-server with the existing lldb and it still worked.
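When an architecture mismatch is suspected, it can also help to confirm what the pushed lldb-server binary actually is. file(1) or llvm-readelf -h are the proper tools; as a sketch, the ELF e_machine field can also be read directly (the `elf_machine` helper is mine, and it assumes a little-endian host):

```shell
# Rough sketch: read the ELF e_machine field (2 bytes, little-endian,
# at offset 0x12) to see what architecture a binary was built for.
# Assumes a little-endian host (od -t u2 uses host byte order).
elf_machine() {
  m=$(od -An -t u2 -j 18 -N 2 "$1" | tr -d ' ')
  case "$m" in
    183) echo aarch64 ;;
    62)  echo x86_64 ;;
    40)  echo arm ;;
    *)   echo "unknown($m)" ;;
  esac
}
# e.g.: elf_machine prebuilts/clang/host/linux-x86/clang-r399163b/runtimes_ndk_cxx/aarch64/lldb-server
```

If this prints x86_64 for the binary you pushed, the wrong prebuilt was selected before lldb ever entered the picture.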

How to debug an OID that cannot be translated with a new MIB file (UPS-MIB)?

On Centos, I ran into the following error:
sudo snmptrap -v 2c -c read localhost '' UPS-MIB::upsTraps
MIB search path: /root/.snmp/mibs:/usr/share/snmp/mibs
Cannot find module (UPS-MIB): At line 0 in (none)
UPS-MIB::upsTraps: Unknown Object Identifier
The error above happened after I:
copied UPS-MIB.txt to /usr/share/snmp/mibs
started snmptrapd:
snmptrapd -f -Lo -Dread-config -m ALL
The version of the Net-SNMP is 5.2.x.
The same procedure works fine with Ubuntu 18.04/Net-SNMP 5.7.3.
How can I debug and fix this?
Besides the Net-SNMP version difference: on Ubuntu I had found instructions to install mib-download-tool, run it after installing Net-SNMP, and comment out the lines beginning with mibs: in snmp.conf, in order to fix errors about missing MIBs.
On CentOS, however, I found no such instruction and saw no error message about missing MIBs, so I have not done any of that.
The MIB file is downloaded from https://tools.ietf.org/rfc/rfc1628.txt
renamed to UPS-MIB.txt. (It seems to me that the name of the MIB file does not matter, as long as it's unique? I tried different names, upsMIB.txt and rfc1628.txt, but that did not help.)
I solved the problem as follows:
manually copied /usr/share/snmp/mibs/ietf/UPS-MIB from an Ubuntu machine with Net-SNMP 5.7.3 installed to /usr/share/snmp/mibs/UPS-MIB on the CentOS machine
then restarted snmpd:
service snmpd restart
After that, the OIDs from UPS-MIB became visible and accessible.
Perhaps the version I downloaded from https://tools.ietf.org/rfc/rfc1628.txt is not suitable?
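That is plausible: https://tools.ietf.org/rfc/rfc1628.txt is the full RFC, complete with page headers, footers, and form feeds, whereas the file Ubuntu ships under mibs/ietf/ is the extracted MIB module, and older parsers can choke on the pagination. smistrip, from the libsmi package, is the standard tool for extracting modules from RFC text. As a rough fallback sketch (the `extract_mib` helper and its patterns are mine, and only approximate):

```shell
# Rough fallback when smistrip (libsmi) is not available: keep only the
# module body and drop RFC pagination (form feeds, "[Page N]" footers,
# the running header). Approximate; smistrip is the robust way.
extract_mib() {
  sed -n '/DEFINITIONS ::= BEGIN/,/^END$/p' "$1" \
    | grep -v -e '\[Page [0-9]*\]' -e '^RFC 1628 ' \
    | tr -d '\f'
}
# extract_mib rfc1628.txt > /usr/share/snmp/mibs/UPS-MIB.txt
```

After writing the cleaned module into the MIB directory, snmptranslate -m UPS-MIB UPS-MIB::upsTraps should resolve if the parse succeeded.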

During start of Docker I get this message: "getting the final child's pid from pipe caused \"read init-p: connection reset by peer\""

I have Docker installed under CentOS Linux 7.6.1810 with Plesk Onyx 17.8.11, and everything was fine. Since a few hours ago I can no longer start MongoDB or Docker.
I get this error message
{"message":"OCI runtime create failed: container_linux.go:344: starting container process caused \"process_linux.go:297: getting the final child's pid from pipe caused \\\"read init-p: connection reset by peer\\\"\": unknown"}
What could it be?
I have fixed it: I downgraded containerd.io to version 1.2.0 and Docker is running again.
Docker-ce 18.09.2 with Linux kernel 3.10.0 produced the same problem for me. To use Docker-ce 18.09.2, Linux kernel 4.x+ is required.
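The kernel condition can be sketched as a quick pre-check; `kernel_too_old` is a hypothetical helper, and the yum commands in the comments are examples for CentOS 7, not verified package versions:

```shell
# Illustrative: the incompatibility only bites on old kernels.
kernel_too_old() { [ "$(echo "$1" | cut -d. -f1)" -lt 4 ]; }

if kernel_too_old "$(uname -r)"; then
  echo "kernel < 4.x: downgrade containerd.io or upgrade the kernel"
fi
# The downgrade on CentOS 7 looked roughly like (versions are examples):
#   yum list containerd.io --showduplicates
#   sudo yum downgrade containerd.io-1.2.0-*
#   sudo systemctl restart docker
```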

Problems importing with Mongodb: fatal error: MSpanList_Insert

I'm running into a fatal error when trying to import using MongoDB 3.3.9. My script worked before, but after I upgraded macOS to Sierra I'm hitting what looks like a Go runtime problem.
Error received :
fatal error: MSpanList_Insert
runtime stack:
runtime.MSpanList_Insert(0x491d30, 0x54daf0)
/usr/local/go/src/runtime/mheap.c:692 +0x8f
runtime.MHeap_Alloc(0x491cc0, 0x2, 0x10000000026, 0xdbc9)
/usr/local/go/src/runtime/mheap.c:240 +0x66
runtime.MCentral_CacheSpan(0x49b0b8, 0x34872)
/usr/local/go/src/runtime/mcentral.c:85 +0x167
runtime.MCache_Refill(0x527c20, 0xc200000026, 0x5550b8)
/usr/local/go/src/runtime/mcache.c:90 +0xa0
Others have noted a similar problem that was supposed to be resolved in an earlier version (mongorestore random crash (fatal error)), but my problem persists.
As the comments suggested, it's solved by reinstalling mongo. If you installed it using brew, run: brew uninstall mongo.
If you just followed the steps in their tutorial, delete the executable. If you don't know how, follow this:
which mongo
# now you have a path
rm -rf yourMongoPath
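If you are unsure which of the two removal routes above applies, here is a rough heuristic (`install_kind` is my own name for it; a /usr/local/bin path can be a brew symlink into the Cellar, so it resolves the link first):

```shell
# Rough heuristic: guess how mongo was installed from its path, so you
# know whether to use `brew uninstall` or delete the binary by hand.
install_kind() {
  p=$(readlink "$1" 2>/dev/null || echo "$1")   # resolve a symlink if any
  case "$p" in
    *Cellar*) echo brew ;;     # lives in (or links into) the brew Cellar
    *)        echo manual ;;   # assume a hand-installed binary
  esac
}
# install_kind "$(which mongo)"   # brew   -> brew uninstall mongo
#                                 # manual -> delete the executable yourself
```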
sudo launchctl unload /System/Library/LaunchDaemons/org.ntp.ntpd.plist
worked for me