clapack.so: undefined symbol: clapack_sgesv on RHEL - scipy

I'm getting this error when importing scipy.stats:
import scipy.stats
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib64/python2.6/site-packages/scipy/stats/__init__.py", line 322, in <module>
from stats import *
File "/usr/lib64/python2.6/site-packages/scipy/stats/stats.py", line 194, in <module>
import scipy.linalg as linalg
File "/usr/lib64/python2.6/site-packages/scipy/linalg/__init__.py", line 116, in <module>
from basic import *
File "/usr/lib64/python2.6/site-packages/scipy/linalg/basic.py", line 12, in <module>
from lapack import get_lapack_funcs
File "/usr/lib64/python2.6/site-packages/scipy/linalg/lapack.py", line 15, in <module>
from scipy.linalg import clapack
ImportError: /usr/lib64/python2.6/site-packages/scipy/linalg/clapack.so: undefined symbol: clapack_sgesv
It looks like clapack.so is linked against the ATLAS libraries:
ldd /usr/lib64/python2.6/site-packages/scipy/linalg/clapack.so
linux-vdso.so.1 => (0x00007fff232e6000)
liblapack.so.3 => /usr/lib64/liblapack.so.3 (0x00007f23b8ad7000)
libptf77blas.so.3 => /usr/lib64/atlas/libptf77blas.so.3 (0x00007f23b88b7000)
libptcblas.so.3 => /usr/lib64/atlas/libptcblas.so.3 (0x00007f23b8697000)
libatlas.so.3 => /usr/lib64/atlas/libatlas.so.3 (0x00007f23b8120000)
libpython2.6.so.1.0 => /usr/lib64/libpython2.6.so.1.0 (0x00007f23b7d65000)
libgfortran.so.3 => /usr/lib64/libgfortran.so.3 (0x00007f23b7a73000)
libm.so.6 => /lib64/libm.so.6 (0x00007f23b77da000)
libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x00007f23b75c3000)
libc.so.6 => /lib64/libc.so.6 (0x00007f23b7232000)
libblas.so.3 => /usr/lib64/libblas.so.3 (0x00007f23b6fdb000)
libpthread.so.0 => /lib64/libpthread.so.0 (0x00007f23b6dbd000)
libdl.so.2 => /lib64/libdl.so.2 (0x00007f23b6bb9000)
libutil.so.1 => /lib64/libutil.so.1 (0x00007f23b69b6000)
/lib64/ld-linux-x86-64.so.2 (0x00000032a2200000)
Any ideas?
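For what it's worth, you can reproduce the failure without going through scipy's import chain by dlopen-ing the extension directly with ctypes (a diagnostic sketch; the path is taken from the traceback above):
import ctypes

# ctypes dlopens with RTLD_NOW, so unresolved symbols fail immediately;
# this should raise OSError with the same "undefined symbol: clapack_sgesv".
ctypes.CDLL("/usr/lib64/python2.6/site-packages/scipy/linalg/clapack.so")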

Related

pynestkernel ImportError: libmpi_cxx.so.20: cannot open shared object file: No such file or directory

When installing NEST 2.18 with:
cmake \
-Dwith-mpi=/usr/lib/x86_64-linux-gnu/openmpi \
-Dwith-python=3 \
-DPYTHON_EXECUTABLE=/home/robin/.pyenv/versions/3.8.6/bin/python \
-DPYTHON_LIBRARY=/home/robin/.pyenv/versions/3.8.6/lib/libpython3.8.so \
-DPYTHON_INCLUDE_DIR=/home/robin/.pyenv/versions/3.8.6/include/python3.8/ \
-DCMAKE_INSTALL_PREFIX=/home/robin/nest-install \
..
It seems that NEST 2.18 looks for libmpi_cxx.so.20 even though it doesn't exist and isn't part of the installed MPI library:
$ ldd nest-install/lib/python3.8/site-packages/nest/pynestkernel.so
linux-vdso.so.1 (0x00007fff3bb37000)
libpython3.8.so.1.0 => /home/robin/.pyenv/versions/3.8.6/lib/libpython3.8.so.1.0 (0x00007feaa1401000)
libnest.so => /nest/2.18/lib/libnest.so (0x00007feaa11c1000)
libmodels.so => /nest/2.18/lib/libmodels.so (0x00007feaa0930000)
libtopology.so => /nest/2.18/lib/libtopology.so (0x00007feaa0691000)
libnestkernel.so => /nest/2.18/lib/libnestkernel.so (0x00007feaa0335000)
librandom.so => /nest/2.18/lib/librandom.so (0x00007feaa00e9000)
libsli.so => /nest/2.18/lib/libsli.so (0x00007fea9fd9a000)
libstdc++.so.6 => /usr/lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007fea9fba1000)
libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007fea9fb86000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007fea9f994000)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007fea9f971000)
libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007fea9f969000)
libutil.so.1 => /lib/x86_64-linux-gnu/libutil.so.1 (0x00007fea9f964000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007fea9f815000)
libprecise.so => /nest/2.18/lib/libprecise.so (0x00007fea9f592000)
libltdl.so.7 => /usr/lib/x86_64-linux-gnu/libltdl.so.7 (0x00007fea9f587000)
libmpi_cxx.so.20 => /usr/lib/x86_64-linux-gnu/openmpi/lib/libmpi_cxx.so.20 (0x00007fea9f567000)
libmpi.so.20 => not found
libnestutil.so => /nest/2.18/lib/libnestutil.so (0x00007fea9f363000)
libgsl.so.23 => /usr/lib/x86_64-linux-gnu/libgsl.so.23 (0x00007fea9f0e7000)
libgslcblas.so.0 => /usr/lib/x86_64-linux-gnu/libgslcblas.so.0 (0x00007fea9f0a5000)
libgomp.so.1 => /usr/lib/x86_64-linux-gnu/libgomp.so.1 (0x00007fea9f063000)
/lib64/ld-linux-x86-64.so.2 (0x00007feaa180a000)
libmpi.so.20 => not found
libmpi.so.20 => not found
libmpi.so.20 => not found
libmpi.so.20 => not found
libmpi.so.40 => /usr/local/lib/libmpi.so.40 (0x00007fea9ed39000)
libopen-rte.so.40 => /usr/local/lib/libopen-rte.so.40 (0x00007fea9ea83000)
libopen-pal.so.40 => /usr/local/lib/libopen-pal.so.40 (0x00007fea9e76b000)
librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x00007fea9e760000)
libz.so.1 => /lib/x86_64-linux-gnu/libz.so.1 (0x00007fea9e744000)
I've tried changing all of the CMake variables using ccmake, but I can't get it to link against libmpi_cxx.so.40.
Even when I build without MPI support it still includes this link, which seems like a bug:
robin@robin-ZenBook-UX533FN:~$ ldd nest-install/lib/python3.8/site-packages/nest/pynestkernel.so
linux-vdso.so.1 (0x00007ffe63518000)
libpython3.8.so.1.0 => /home/robin/.pyenv/versions/3.8.6/lib/libpython3.8.so.1.0 (0x00007fa1e3c37000)
libnest.so => /nest/2.18/lib/libnest.so (0x00007fa1e39f7000)
libmodels.so => /nest/2.18/lib/libmodels.so (0x00007fa1e3166000)
libtopology.so => /nest/2.18/lib/libtopology.so (0x00007fa1e2ec7000)
libnestkernel.so => /nest/2.18/lib/libnestkernel.so (0x00007fa1e2b6b000)
librandom.so => /nest/2.18/lib/librandom.so (0x00007fa1e291f000)
libsli.so => /nest/2.18/lib/libsli.so (0x00007fa1e25d0000)
libstdc++.so.6 => /usr/lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007fa1e23d7000)
libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007fa1e23bc000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007fa1e21ca000)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007fa1e21a7000)
libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007fa1e219f000)
libutil.so.1 => /lib/x86_64-linux-gnu/libutil.so.1 (0x00007fa1e219a000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007fa1e204b000)
libprecise.so => /nest/2.18/lib/libprecise.so (0x00007fa1e1dc8000)
libltdl.so.7 => /usr/lib/x86_64-linux-gnu/libltdl.so.7 (0x00007fa1e1dbd000)
libmpi_cxx.so.20 => not found
libmpi.so.20 => not found
libnestutil.so => /nest/2.18/lib/libnestutil.so (0x00007fa1e1bb7000)
libgsl.so.23 => /usr/lib/x86_64-linux-gnu/libgsl.so.23 (0x00007fa1e193b000)
libgslcblas.so.0 => /usr/lib/x86_64-linux-gnu/libgslcblas.so.0 (0x00007fa1e18f9000)
libgomp.so.1 => /usr/lib/x86_64-linux-gnu/libgomp.so.1 (0x00007fa1e18b7000)
/lib64/ld-linux-x86-64.so.2 (0x00007fa1e4040000)
libmpi_cxx.so.20 => not found
libmpi.so.20 => not found
libmpi_cxx.so.20 => not found
libmpi.so.20 => not found
libmpi_cxx.so.20 => not found
libmpi.so.20 => not found
libmpi_cxx.so.20 => not found
libmpi.so.20 => not found
The full error when importing it is:
>>> import nest
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/robin/nest-install/lib/python3.8/site-packages/nest/__init__.py", line 26, in <module>
from . import ll_api # noqa
File "/home/robin/nest-install/lib/python3.8/site-packages/nest/ll_api.py", line 72, in <module>
from . import pynestkernel as kernel # noqa
ImportError: libmpi_cxx.so.20: cannot open shared object file: No such file or directory

I am trying to kill the dsmcad service, but getting "global name 'dsmcad' is not defined"

import os
import signal
from subprocess import check_output

def get_pid(name):
    return check_output(["pidof", name])

def main():
    os.kill(get_pid(dsmcad), signal.SIGTERM)  # or signal.SIGKILL

if __name__ == "__main__":
    main()
I'm getting this error:
Traceback (most recent call last):
File "<stdin>", line 2, in <module>
File "<stdin>", line 2, in main
NameError: global name 'dsmcad' is not defined
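For reference, two fixes are needed: dsmcad must be quoted (the bare name is looked up as a variable, which is exactly what raises the NameError), and the output of pidof must be converted to an int before os.kill can use it. A minimal corrected sketch, assuming a single dsmcad process:
import os
import signal
from subprocess import check_output

def get_pid(name):
    # pidof prints the PID followed by a newline; int() strips the
    # whitespace and converts it. Assumes exactly one matching process.
    return int(check_output(["pidof", name]))

def main():
    # "dsmcad" must be a string literal; the unquoted name was being
    # looked up as a (nonexistent) variable, causing the NameError.
    os.kill(get_pid("dsmcad"), signal.SIGTERM)  # or signal.SIGKILL

if __name__ == "__main__":
    main()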

Why can't I initialize a Python queue twice in the same program?

When my program is the following...
import queue
queue = queue.Queue()
queue = None
queue = queue.Queue()
...my output is the following:
AttributeError: 'NoneType' object has no attribute 'Queue'
But when my program is the following...
import queue
queue = queue.Queue()
queue = None
...no error messages are thrown.
Why is this the case? I need to reinitialize my queue.
When you imported the module queue, you actually created a variable queue referencing an object of type module.
Then, when you created a queue named queue, you redefined the variable queue to be an object of type queue.Queue.
No wonder you could not call queue.Queue() after that!
QED.
In detail:
>>> import queue
>>> type(queue)
<class 'module'>
>>> # Here you redefine the variable queue: the module queue won't be accessible after that
>>> queue = queue.Queue()
>>> type(queue)
<class 'queue.Queue'>
>>> queue
<queue.Queue object at ***>
>>> # Here I try to call Queue() on an object of type Queue...
>>> queue = queue.Queue()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'Queue' object has no attribute 'Queue'
>>> queue = None
>>> # And here I try to call Queue() on an object of type None...
>>> queue = queue.Queue()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'NoneType' object has no attribute 'Queue'
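If you need to reinitialize the queue, the fix is to keep the module and the instance under different names, e.g.:
import queue

q = queue.Queue()   # the module is still reachable as `queue`
q = None
q = queue.Queue()   # works: the name `queue` was never rebound
Alternatively, import the class directly with from queue import Queue and create instances with Queue().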

PySpark: 'ResultIterable' object has no attribute 'request_tm'

I'm using PySpark to process some data. The data looks like this:
8611060350280948828b33be803 4363 2017-10-01
8611060350280948828b33be803 4363 2017-10-02
4e5556e536714363b195eb8f88becbf8 365 2017-10-01
4e5556e536714363b195eb8f88becbf8 365 2017-10-02
4e5556e536714363b195eb8f88becbf8 365 2017-10-03
4e5556e536714363b195eb8f88becbf8 365 2017-10-04
I created a class to store these records:
class LogInfo:
    def __init__(self, session_id, sku_id, request_tm):
        self.session_id = session_id
        self.sku_id = sku_id
        self.request_tm = request_tm
The processing code is as follows:
from classFile import LogInfo
from pyspark import SparkContext, SparkConf

conf = SparkConf().setMaster("local[*]")
sc = SparkContext(conf=conf)
orgData = sc.textFile(<dataPath>)
readyData = orgData.map(lambda x: x.split('\t')).\
    filter(lambda x: x[0].strip() != "" and x[1].strip() != "" and x[2].strip() != "").\
    map(lambda x: LogInfo(x[0], x[1], x[2])).groupBy(lambda x: x.session_id).\
    filter(lambda x: len(x[1]) > 3).filter(lambda x: len(x[1]) < 20).\
    map(lambda x: x[1]).sortBy(lambda x: x.request_tm).map(lambda x: x.sku_id)
But the code didn't work. The error output is below:
org.apache.spark.api.python.PythonException: Traceback (most recent call last):
File "D:\spark-2.2.0-bin-hadoop2.7\spark-2.2.0-bin-hadoop2.7\python\lib\pyspark.zip\pyspark\worker.py", line 177, in main
File "D:\spark-2.2.0-bin-hadoop2.7\spark-2.2.0-bin-hadoop2.7\python\lib\pyspark.zip\pyspark\worker.py", line 172, in process
File "D:\spark-2.2.0-bin-hadoop2.7\spark-2.2.0-bin-hadoop2.7\python\pyspark\rdd.py", line 2423, in pipeline_func
return func(split, prev_func(split, iterator))
File "D:\spark-2.2.0-bin-hadoop2.7\spark-2.2.0-bin-hadoop2.7\python\pyspark\rdd.py", line 2423, in pipeline_func
return func(split, prev_func(split, iterator))
File "D:\spark-2.2.0-bin-hadoop2.7\spark-2.2.0-bin-hadoop2.7\python\pyspark\rdd.py", line 2423, in pipeline_func
return func(split, prev_func(split, iterator))
File "D:\spark-2.2.0-bin-hadoop2.7\spark-2.2.0-bin-hadoop2.7\python\pyspark\rdd.py", line 346, in func
return f(iterator)
File "D:\spark-2.2.0-bin-hadoop2.7\spark-2.2.0-bin-hadoop2.7\python\pyspark\rdd.py", line 1041, in <lambda>
return self.mapPartitions(lambda i: [sum(1 for _ in i)]).sum()
File "D:\spark-2.2.0-bin-hadoop2.7\spark-2.2.0-bin-hadoop2.7\python\pyspark\rdd.py", line 1041, in <genexpr>
return self.mapPartitions(lambda i: [sum(1 for _ in i)]).sum()
File "D:\spark-2.2.0-bin-hadoop2.7\spark-2.2.0-bin-hadoop2.7\python\pyspark\rdd.py", line 2053, in <lambda>
return self.map(lambda x: (f(x), x))
File "D:<filePath>", line 15, in <lambda>
map(lambda x: x[1]).sortBy(lambda x:x.request_tm).map(lambda x: x.sku_id)
AttributeError: 'ResultIterable' object has no attribute 'request_tm'
at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRDD.scala:193)
at org.apache.spark.api.python.PythonRunner$$anon$1.<init>(PythonRDD.scala:234)
at org.apache.spark.api.python.PythonRunner.compute(PythonRDD.scala:152)
at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:63)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
at org.apache.spark.scheduler.Task.run(Task.scala:108)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:335)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
[Stage 1:> (0 + 5) / 10]
17/12/01 17:54:15 WARN TaskSetManager: Lost task 3.0 in stage 1.0 (TID 13, localhost, executor driver): org.apache.spark.api.python.PythonException:
........
I think the main error information is shown above. I couldn't figure out where I made a mistake. Could anybody help? Thank you very much!
I think you need to replace this:
map(lambda x: x[1])
with this:
flatMap(lambda x: list(x[1]))
Basically, after the groupBy, x[1] is a ResultIterable object, so if you want to sort the individual records you first need to flatten the groups back into a single RDD.
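Putting that change into the question's pipeline gives roughly the following (a sketch; everything besides the flatMap and the renamed lambda variables is unchanged from the question). With flatMap, each individual LogInfo record, rather than each grouped ResultIterable, flows into sortBy:
readyData = orgData.map(lambda x: x.split('\t')).\
    filter(lambda x: x[0].strip() != "" and x[1].strip() != "" and x[2].strip() != "").\
    map(lambda x: LogInfo(x[0], x[1], x[2])).groupBy(lambda x: x.session_id).\
    filter(lambda x: len(x[1]) > 3).filter(lambda x: len(x[1]) < 20).\
    flatMap(lambda x: list(x[1])).\
    sortBy(lambda rec: rec.request_tm).map(lambda rec: rec.sku_id)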
Edit:
If you need a list of sku_id inside the rdd then:
.map(lambda x: [y.sku_id for y in sorted(list(x[1]), key=lambda y: y.request_tm)])

How can I determine if Akka bytestring contains a given substring?

Given a sample text file, how can one use Akka ByteStrings to either convert it to plain text or run a "find" on the ByteString itself?
val file = new File("sample.txt")
val fileSource = SynchronousFileSource(file, 4096)
val messageStream = fileSource.map(chunk => sendMessage(chunk.toString()))
messageStream.to(Sink.foreach(println(_))).run
The "toString()" functionality above literally spits out a string containing the text "ByteString", followed by bytes represented as integers. For example:
chunk.toString() ==> "ByteString(111, 112, 119, 111)"
You can use containsSlice to find a sub-ByteString:
scala> import akka.util.ByteString;
import akka.util.ByteString
scala> val target = ByteString("hello world");
target: akka.util.ByteString = ByteString(104, 101, 108, 108, 111, 32, 119, 111, 114, 108, 100)
scala> val sub = ByteString("world")
sub: akka.util.ByteString = ByteString(119, 111, 114, 108, 100)
scala> target.containsSlice(sub)
res0: Boolean = true
If you want to convert an akka.util.ByteString to a String, you can use decodeString:
scala> ByteString("hello").decodeString("UTF-8")
res3: String = hello
See the doc for more detail: http://doc.akka.io/api/akka/2.3.13/index.html#akka.util.ByteString