numba caching issue: cannot cache function / no locator available for file

I am trying to deploy a codebase that has a number of numba.njit functions with cache=True.
It works fine running locally (Mac OS X 10.12.3), but on the remote machine (Ubuntu 14.04 on AWS) I am getting the following error:
RuntimeError at /portal/
cannot cache function 'filter_selection':
no locator available for file:
'/srv/run/miniconda/envs/mbenv/lib/python2.7/site-packages/mproj/core_calcs/filter.py'
I looked through the numba codebase, and I saw this file: https://github.com/numba/numba/blob/master/numba/caching.py
It appears that the following call is returning None instead of a locator, which causes this exception to be raised:
cls.from_function(py_func, source_path)
I'm guessing this is a permissions issue with writing the __pycache__ folders, but I didn't see a way in the numba docs to specify the cache folder location (CACHE_DIR).
Has anyone hit this before, and if so, what is the suggested work-around?

Setting sys.frozen = True before the for cls in self._locator_classes: loop in caching.py can eliminate the issue.
I have no idea whether such a setting will impact performance.
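A minimal sketch of applying that workaround from user code instead of patching caching.py, assuming numba consults sys.frozen when choosing a locator as described above (the njit function here is purely illustrative):

import sys
sys.frozen = True  # mark the interpreter as "frozen" before numba builds its cache locators

from numba import njit

@njit(cache=True)
def add_one(x):  # illustrative cached function
    return x + 1

print(add_one(41))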

Related

Problems when using Chapel 1.19 along with GASNet PSM (OmniPath) substrate

After changing to version 1.19, while still using the Omni-Path implementation, I randomly receive the following error: ERROR calling: gasnet_barrier_try(id, 0).
I know that the Omni-Path implementation of GASNet is no longer supported by the current version of Chapel. However, I would like to use some features available only in version 1.19, and the cluster I use runs over an Omni-Path network.
In order to use the PSM substrate (OmniPath), I proceed as suggested by Chapel's Gitter community:
export CHPL_GASNET_ALLOW_BAD_SUBSTRATE=true
wget https://gasnet.lbl.gov/download/GASNet-1.32.0.tar.gz
tar xzf GASNet-1.32.0.tar.gz
rm -rf $CHPL_HOME/third-party/gasnet/gasnet-src
mv GASNet-1.32.0 $CHPL_HOME/third-party/gasnet/gasnet-src
Then I set up the other variables:
export CHPL_COMM='gasnet'
export CHPL_LAUNCHER='gasnetrun_psm'
export CHPL_COMM_SUBSTRATE='psm'
export CHPL_GASNET_SEGMENT='everything'
export CHPL_TARGET_CPU='native'
export GASNET_PSM_SPAWNER='ssh'
export HFI_NO_CPUAFFINITY=1
Next, I build the runtime, etc.
However, when I run experiments, I randomly receive the following error:
ERROR calling: gasnet_barrier_try(id, 0)
at: comm-gasnet.c:1020
error: GASNET_ERR_BARRIER_MISMATCH (Barrier id's mismatched)
This terminates the execution of the program.
I cannot find the reason for this error in the GASNet documentation; I could only find a bit of information in GASNet's source code.
Do you know what the cause of this problem is?
Thank you all.
I realize this is an old question, but for the record the current version of Chapel (1.28.0) now embeds a version of GASNet (GASNet-EX 2022.3.0 as of this writing) whose CHPL_COMM=gasnet CHPL_COMM_SUBSTRATE=ofi configuration (aka GASNet ofi-conduit) provides high-quality support for Intel Omni-Path.
In particular, there should no longer be any reason to clobber Chapel's embedded version of GASNet-EX with an ancient/outdated GASNet-1 to get Omni-Path support, as suggested in the original question.
For more details see Chapel's detailed Omni-Path instructions.
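For reference, with the current toolchain the environment setup from the question reduces to something like this (a sketch based only on the settings named above; see the linked instructions for the authoritative steps):
export CHPL_COMM='gasnet'
export CHPL_COMM_SUBSTRATE='ofi'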

Why does BitBake error if it can't find www.example.com?

BitBake fails for me because it can't find https://www.example.com.
My computer is an x86-64 running native Xubuntu 18.04. Network connection is via DSL. I'm using the latest versions of the OpenEmbedded/Yocto toolchain.
This is the response I get when I run BitBake:
$ bitbake -k core-image-sato
WARNING: Host distribution "ubuntu-18.04" has not been validated with this version of the build system; you may possibly experience unexpected failures. It is recommended that you use a tested distribution.
ERROR: OE-core's config sanity checker detected a potential misconfiguration.
Either fix the cause of this error or at your own risk disable the checker (see sanity.conf).
Following is the list of potential problems / advisories:
Fetcher failure for URL: 'https://www.example.com/'. URL https://www.example.com/ doesn't work.
Please ensure your host's network is configured correctly,
or set BB_NO_NETWORK = "1" to disable network access if
all required sources are on local disk.
Summary: There was 1 WARNING message shown.
Summary: There was 1 ERROR message shown, returning a non-zero exit code.
The networking issue, the reason why I can't access www.example.com, is a question for the SuperUser forum. My question here is: why does BitBake rely on the existence of www.example.com? What is it about that website that is so vital to BitBake's operation? Why does BitBake raise an error if it cannot find https://www.example.com?
At this time, I don't wish to set BB_NO_NETWORK = "1". I would rather understand and resolve the root cause of the problem first.
Modifying poky.conf didn't work for me (and from what I read, modifying anything under Poky is a no-no for a long-term solution).
Modifying conf/local.conf was the only solution that worked for me. Simply add one of the two options:
#check connectivity using google
CONNECTIVITY_CHECK_URIS = "https://www.google.com/"
#skip connectivity checks
CONNECTIVITY_CHECK_URIS = ""
This solution was originally found here.
For me, this appears to be a problem with my ISP (CenturyLink) not correctly resolving www.example.com. If I try to navigate to https://www.example.com in the browser address bar I just get taken to the ISP's "this is not a valid address" page.
Technically speaking, this isn't supposed to happen, but for whatever reason it does. I was able to work around this temporarily by modifying the CONNECTIVITY_CHECK_URIS in poky/meta-poky/conf/distro/poky.conf to something that actually resolves:
# The CONNECTIVITY_CHECK_URI's are used to test whether we can succesfully
# fetch from the network (and warn you if not). To disable the test set
# the variable to be empty.
# Git example url: git://git.yoctoproject.org/yocto-firewall-test;protocol=git;rev=master
CONNECTIVITY_CHECK_URIS ?= "https://www.google.com/"
See this commit for more insight and discussion on the addition of the www.example.com check. Not sure what the best long-term fix is, but the change above allowed me to build successfully.
If you want to resolve this issue without modifying poky.conf or local.conf or any of the files for that matter, just do:
$ touch conf/sanity.conf
It is clearly written in meta/conf/sanity.conf that:
Expert users can confirm their sanity with "touch conf/sanity.conf"
If you don't want to execute this command on every session or build, you can comment out the line INHERIT += "sanity" from meta/conf/sanity.conf, so the file looks something like this:
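# Expert users can confirm their sanity with "touch conf/sanity.conf"
#INHERIT += "sanity"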
I had the same issue with my ISP (Bell): accessing example.com gave a DNS error. I solved it by switching from the ISP's DNS to Google's DNS (to avoid making changes to configs):
https://developers.google.com/speed/public-dns/docs/using
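If you want to confirm that your resolver is the culprit, here is a quick check (a sketch in Python; www.example.com is a real, IANA-operated site and should resolve normally rather than to an ISP landing page):

import socket

# A hijacking resolver typically returns the ISP's own web server here
# instead of the IANA-operated address, or fails outright.
print(socket.gethostbyname("www.example.com"))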

Azure batch Application package not getting copied to Working Directory of Task

I have created an Azure Batch pool with a Linux machine and specified an application package for the pool.
My command line is:
command='python $AZ_BATCH_APP_PACKAGE_scriptv1_1/tasks/XXX/get_XXXXX_data.py',
but the task fails with:
python3: can't open file '$AZ_BATCH_APP_PACKAGE_scriptv1_1/tasks/XXX/get_XXXXX_data.py':
[Errno 2] No such file or directory
When I connect to the node and look at the working directory, none of the application package files are present there.
How do I make sure that the files from the application package are available in the working directory, or how can I invoke/execute files under the application package from the command line?
Make sure that your async operations have proper awaits in place before you start using the package in your code.
Also, please share your design / pseudo-code scenario and how you are approaching it as a design.
Further to add:
It seems like this one is a pool-level package.
The error suggests that the application environment variable is either used incorrectly or that there is some other user-level issue. Please check out the link below, especially the section where the use of the environment variable is described.
This looks like a user-level issue because, in case of an error downloading the package resource, it will be visible to you via an exception handler, at the tool level if you are using Batch Explorer / BatchLabs, or through code-level exception handling.
https://learn.microsoft.com/en-us/azure/batch/batch-application-packages
Reason / rationale:
If the pool-level or task-level application has an error, an error list will come back; an error in the application package will be returned as a UserError or an AppPackageError, which will be visible in the code's exception handler.
Also, you can always remote into your node and check the package availability; information here: https://learn.microsoft.com/en-us/azure/batch/batch-api-basics#connecting-to-compute-nodes
I once created a small sample to help people, so this resource might help you to check out the usage here.
Hope this helps.
On Linux, the application package environment variable with the version string is formatted as:
AZ_BATCH_APP_PACKAGE_{0}_{1}
On Windows it is formatted as:
AZ_BATCH_APP_PACKAGE_APPLICATIONID#version
where {0} is the application name and {1} is the version.
$AZ_BATCH_APP_PACKAGE_scriptv1_1 will take you to the root folder where the application was unzipped.
Does this "exact" path exist in that location?
tasks/XXX/get_XXXXX_data.py
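A quick way to check this from task code or an interactive shell on the node (a sketch; the variable name and the tasks/XXX path are taken from the question):

import os

# Resolve the unzip root of the application package from the task's
# environment, then verify the expected script path exists under it.
pkg_root = os.environ["AZ_BATCH_APP_PACKAGE_scriptv1_1"]
script = os.path.join(pkg_root, "tasks", "XXX", "get_XXXXX_data.py")
print(script, "exists" if os.path.exists(script) else "is missing")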
You can see more information here:
https://learn.microsoft.com/en-us/azure/batch/batch-application-packages
Edit: Just saw this question: "or can I invoke/execute files under Application Package from command line"
Yes you can invoke and execute files from the application package directory with the environment variable above.
If you type env on the node you will see the environment variables that have been set.

How does one access the configuration from inside an IPython 3.x / Jupyter Notebook?

In particular I would like to know the base_url of the Notebook Server that the code is running in.
In IPython Notebooks version 2.x I used to do the following:
config = get_ipython().config
print config['NotebookApp']['base_url']
However this no longer works in IPython Notebook 3.x / Jupyter Notebooks.
EDIT: Some more detail on what I am trying to achieve.
I run various IPython Servers in separate Docker containers on the same host which are accessed through different base_urls. I would like to use the quantopian/qgrid package to display Pandas DataFrames inside the Notebook. Initially qgrid did not handle custom base_url prefixes for serving up a local copy of the Javascript dependencies but the code above allowed me to find the base_url in IPython 2 and to inject the relevant base_url into the Javascript template.
I would also like to use the mpld3 library in the Notebook and when browsing their documentation I found that they also mention that in "IPython 2.0+, local=True may fail if a url prefix is added (e.g. by setting NotebookApp.base_url)" so it seems that this is not an isolated problem and a good solution would be worthwhile.
Given @matt's comment below and thinking more about the kernel vs. frontend split, it makes sense that the NotebookApp config isn't accessible from the kernel. It's really the generated JS code that needs to know what the base_url is, so if someone can point me to where I can access this in the Notebook JS API, that should solve it.
From the frontend side, if you publish JavaScript, and assuming you are in a notebook (keep in mind that being in JS does not necessarily mean a notebook; you could be in Atom-Hydrogen or Jupyter-Sidecar), you can use a snippet like:
require(['base/js/utils'], function(utils){
    var base_url = utils.get_body_data('base-url');
})
The data-base-url attribute is set on the <body> tag of the notebook.
It is, though, not guaranteed to stay this way. Usually, extensions should be installed in the nbextensions folder, which should resolve automatically:
require.config({
    ...
    paths: {
        nbextensions: '<base url>/nbextensions',
        kernelspecs: '<base url>/kernelspecs',
        ...
    }
})
nbextensions is a search path, so if it is set correctly on the server, you shouldn't (most of the time) have to serve things yourself at custom URLs, nor handle base_url yourself on the frontend side.
After quite a lot of digging into IPython internals I found something that works for me:
from IPython.config.loader import load_pyconfig_files
config = get_ipython().config
profiledir = config['ProfileDir']['location']
nbconfig = load_pyconfig_files(['ipython_notebook_config.py'], profiledir)
print nbconfig['NotebookApp']['base_url']
EDIT: This works on my installation but I understand now that the kernel is not really the right place to get this info. I'll probably delete this answer once some better answers are up.

Issue with Informix (ifx_connect)

Hi, I have a problem after installing the Informix Client SDK (ref: http://www.debian-administration.org/article/651/Connect_to_Informix_using_PHP5_on_Lenny_x86_64).
OS: CentOS
Here is the .php file that I use to connect:
$db_conn = ifx_connect("dbname#IPHost","user","pass");
I get this error:
Warning: ifx_connect() [function.ifx-connect]: E [SQLSTATE=IX 001 SQLCODE=-1829] in /var/www/html/index.php on line 5
Does anyone know the solution?
Thanks.
The way to find out more about errors from Informix is often:
$ finderr -1829
-1829 Cannot open file citoxmsg.pam.
The file citoxmsg.pam is missing from the directory $INFORMIXDIR/msg.
If this error occurs, note all circumstances and contact IBM Technical Support.
$
(Give or take some blank lines.) The finderr command is found in $INFORMIXDIR/bin. You need $INFORMIXDIR set in the environment unless /usr/informix is correct - it could be a symlink to the actual software directory.
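For example (a sketch; adjust the path to wherever the SDK actually lives on your machine):
export INFORMIXDIR=/usr/informix
$INFORMIXDIR/bin/finderr -1829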
There are two possibilities:
You have not got INFORMIXDIR set in the environment when PHP is run, and/or the php.ini file does not define a value for $INFORMIXDIR, or the value is set incorrectly, or a default (quite possibly /usr/informix) is being used but the software is not installed there.
The installation is not complete - the relevant message file is missing as noted.
Of the two, I think reason 1 is much the more likely.
The IX001 value for SQLSTATE is of minimal use - it is the generic 'something went wrong with Informix' message. The SQLCODE is much more significant and helpful.