Unable to use Diazo (plone.app.theming) on CentOS

I made a web portal on my Mac using Plone 4.1 and Diazo.
Now I'm trying to deploy it on my CentOS server, where there is already another site running Plone 4.0.5 + collective.xdv.
When I run the site (in a brand new buildout) with my Diazo theme, I get these lines in the shell (instance fg):
2011-09-27 09:32:10 ERROR plone.transformchain Unexpected error whilst trying to apply transform chain
Traceback (most recent call last):
File "/home/plone/.buildout/eggs/plone.transformchain-1.0-py2.6.egg/plone/transformchain/transformer.py", line 42, in __call__
newResult = handler.transformIterable(result, encoding)
File "/home/plone/.buildout/eggs/plone.app.theming-1.0b8-py2.6.egg/plone/app/theming/transform.py", line 205, in transformIterable
transform = self.setupTransform()
File "/home/plone/.buildout/eggs/plone.app.theming-1.0b8-py2.6.egg/plone/app/theming/transform.py", line 150, in setupTransform
xsl_params=xslParams,
File "/home/plone/.buildout/eggs/diazo-1.0rc3-py2.6.egg/diazo/compiler.py", line 106, in compile_theme
read_network=read_network,
File "/home/plone/.buildout/eggs/diazo-1.0rc3-py2.6.egg/diazo/rules.py", line 160, in process_rules
rules_doc = fixup_themes(rules_doc)
File "/home/plone/.buildout/eggs/diazo-1.0rc3-py2.6.egg/diazo/utils.py", line 49, in __call__
result = self.xslt(*args, **kw)
File "xslt.pxi", line 568, in lxml.etree.XSLT.__call__ (src/lxml/lxml.etree.c:120289)
XSLTApplyError: xsltValueOf: text copy failed
What is going wrong here?

I had the exact same problem, and it's due to an old libxml2/libxslt. Add these lines to your buildout:
[buildout]
parts =
    lxml    # keep lxml as the first one!
    ...
    instance

[lxml]
recipe = z3c.recipe.staticlxml
egg = lxml
libxml2-url = ftp://xmlsoft.org/libxml2/libxml2-2.7.8.tar.gz
libxslt-url = ftp://xmlsoft.org/libxml2/libxslt-1.1.26.tar.gz
static-build = true
force = false

See Plone - XSLTApplyError: xsltValueOf: text copy failed. You probably have an outdated libxml2, as is often the case with an older distribution like CentOS.
Use z3c.recipe.staticlxml.

It sounds like you might have overly old versions of libxml2 and/or libxslt. I encountered identical problems with libxml2 2.6.26 and libxslt 1.1.17. After upgrading to 2.7.8 and 1.1.26 (respectively), the problems went away.
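To confirm which libxml2/libxslt your instance's lxml is actually compiled against and running with, a quick check from the instance's Python can help (a minimal sketch; it relies only on the version tuples lxml exposes, and bin/zopepy is an assumption about your buildout):
# Print the libxml2/libxslt versions lxml was built against and is running with.
# Run with the same Python that runs Zope (e.g. bin/zopepy, if your buildout provides one).
import lxml.etree

print("lxml:", lxml.etree.LXML_VERSION)
print("libxml2 compiled/runtime:",
      lxml.etree.LIBXML_COMPILED_VERSION, lxml.etree.LIBXML_VERSION)
print("libxslt compiled/runtime:",
      lxml.etree.LIBXSLT_COMPILED_VERSION, lxml.etree.LIBXSLT_VERSION)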
If you can't upgrade the libraries locally, you can move forward quite quickly using the "z3c.recipe.staticlxml" recipe in your buildout:
[lxml]
recipe = z3c.recipe.staticlxml
egg = lxml
Just remember to delete any existing lxml egg in the eggs directory (or possibly in your ~/.buildout/eggs cache, depending on how your ~/.buildout/default.cfg is set up) first.

I just got this to work using Plone 4.2.1 on OS X 10.8 Server, but only once I used the unified installer. I bumped libxml2 up to version 2.8.0. At the time I tried this, libxml2 version 2.9.0 was broken on OS X 10.8.

Related

How to solve numpy import error when calling Anaconda env from Matlab

I want to execute a Python script from Matlab (on a Windows 7 machine). The necessary libraries are installed in an Anaconda virtual environment. When I run the script from the command line, it runs flawlessly.
When calling the script from Matlab as follows:
[status, commandOut] = system('C:/Users/user/AppData/Local/Continuum/anaconda3/envs/tf/python.exe test.py');
or with shell commands, I get an ImportError:
commandOut =
'Traceback (most recent call last):
File "C:\Users\user\AppData\Local\Continuum\anaconda3\envs\tf\lib\site-packages\numpy\core\__init__.py", line 16, in <module>
from . import multiarray
ImportError: DLL load failed: The specified path is invalid.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "test.py", line 2, in <module>
import numpy as np
File "C:\Users\user\AppData\Local\Continuum\anaconda3\envs\tf\lib\site-packages\numpy\__init__.py", line 142, in <module>
from . import add_newdocs
File "C:\Users\user\AppData\Local\Continuum\anaconda3\envs\tf\lib\site-packages\numpy\add_newdocs.py", line 13, in <module>
from numpy.lib import add_newdoc
File "C:\Users\user\AppData\Local\Continuum\anaconda3\envs\tf\lib\site-packages\numpy\lib\__init__.py", line 8, in <module>
from .type_check import *
File "C:\Users\user\AppData\Local\Continuum\anaconda3\envs\tf\lib\site-packages\numpy\lib\type_check.py", line 11, in <module>
import numpy.core.numeric as _nx
File "C:\Users\user\AppData\Local\Continuum\anaconda3\envs\tf\lib\site-packages\numpy\core\__init__.py", line 26, in <module>
raise ImportError(msg)
ImportError:
Importing the multiarray numpy extension module failed. Most
likely you are trying to import a failed build of numpy.
If you're working with a numpy git repo, try `git clean -xdf` (removes all
files not under version control). Otherwise reinstall numpy.
Original error was: DLL load failed: The specified path is invalid.
I already changed the default Matlab Python version to the Anaconda env, but no change:
version: '3.5'
executable: 'C:\Users\user\AppData\Local\Continuum\anaconda3\envs\tf\python.exe'
library: 'C:\Users\user\AppData\Local\Continuum\anaconda3\envs\tf\python35.dll'
home: 'C:\Users\user\AppData\Local\Continuum\anaconda3\envs\tf'
isloaded: 1
Just running my test script without importing numpy works. Reloading numpy (py.importlib.import_module('numpy');) didn't help either; it threw the same error as before.
Does anyone have an idea how to fix this?
So after corresponding with Matlab support, I found out that Matlab depends on the PATH environment variable (paths which are deliberately not set when using a virtual environment), and therefore numpy fails to find the necessary paths when called from within Matlab (even if the call points at the virtual environment's interpreter).
The solution is either to call Matlab from within the virtual environment (via the command line) or to add the missing paths manually to the PATH environment variable.
Maybe this information can help someone else.
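As an illustration of which directories typically go missing, here is a small hedged sketch (run it with the environment's own python.exe) that prints the folders a conda environment on Windows normally puts on PATH when it is activated; these are the ones to prepend, e.g. via setenv in MATLAB or in the system PATH. The exact layout can vary between conda versions, so treat the list as an assumption:
# print_env_paths.py - list the directories a conda env usually adds to PATH
# (typical conda-on-Windows layout; adjust if your installation differs)
import os
import sys

env_root = sys.prefix  # e.g. C:\Users\user\...\envs\tf when run with the env's python.exe
candidates = [
    env_root,
    os.path.join(env_root, "Library", "mingw-w64", "bin"),
    os.path.join(env_root, "Library", "usr", "bin"),
    os.path.join(env_root, "Library", "bin"),   # the DLLs numpy needs usually live here
    os.path.join(env_root, "Scripts"),
]
for path in candidates:
    print(path, "(exists)" if os.path.isdir(path) else "(not found)")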
First Method
You can change the python interpreter with:
pyversion("/home/nibalysc/Programs/anaconda3/bin/python");
And check it with:
pyversion();
You could also do this in a startup.m file in your project folder; every time you start MATLAB from this folder, the Python interpreter will be changed automatically.
Now you can try to use:
py.importlib.import_module('numpy');
Read up on the documentation for using the integrated Python in MATLAB:
Call user defined custom module
Call modified python module
Alternative Method
An alternative method is to create a matlab_shell.sh file with the following content. This is basically the code that gets appended to .bashrc when Anaconda is installed and you let the installer modify your .bashrc:
#!/bin/bash
__conda_setup="$(CONDA_REPORT_ERRORS=false '$HOME/path/to/anaconda3/bin/conda' shell.bash hook 2> /dev/null)"
if [ $? -eq 0 ]; then
    \eval "$__conda_setup"
else
    if [ -f "$HOME/path/to/anaconda3/etc/profile.d/conda.sh" ]; then
        CONDA_CHANGEPS1=false conda activate base
    else
        \export PATH="$HOME/path/to/anaconda3/bin:$PATH"
    fi
fi
unset __conda_setup
# <<< conda init <<<

# >>> conda initialize >>>
# !! Contents within this block are managed by 'conda init' !!
__conda_setup="$('$HOME/path/to/anaconda3/bin/conda' 'shell.bash' 'hook' 2> /dev/null)"
if [ $? -eq 0 ]; then
    eval "$__conda_setup"
else
    if [ -f "$HOME/path/to/anaconda3/etc/profile.d/conda.sh" ]; then
        . "$HOME/path/to/anaconda3/etc/profile.d/conda.sh"
    else
        export PATH="$HOME/path/to/anaconda3/bin:$PATH"
    fi
fi
unset __conda_setup
# <<< conda initialize <<<

conda activate base
eval $2
Then you need to set the MATLAB_SHELL environment variable, either before running MATLAB or in MATLAB itself. The best thing, in my opinion, would be to do this in the startup.m file as well, like this:
setenv("MATLAB_SHELL", "/path/to/matlab_shell.sh");
Afterwards you can use the system(...) function to run conda Python with all your modules installed, like this:
String notation:
system("python -c ""python code goes here"");
Char notation:
system('python -c "python code goes here"');
Hope this helps!
Firstly, if you execute your Python script like a regular system command ([status, commandOut] = system('...python.exe test.py')),
then pyversion (and pyenv, since R2019b) has no effect at all. It only matters if you use the py. integration, as in the code below (and, in most cases, that is a much better approach).
Currently (I use R2019b Update 5) there are a number of pitfalls that might cause issues similar to yours. I'd recommend starting with the following:
Create a new clean conda environment:
conda create -n test_py36 python=3.6 numpy
Create the following dummy demo1.py:
def dummy_py_method(x):
    return x + 1
Create the following run_py_code.m:
function run_py_code()
    % explicit module import sometimes shows more detailed error messages
    py.importlib.import_module('numpy');
    % reload so that any changes to demo1.py are picked up
    pymodule = py.importlib.import_module('demo1');
    py.importlib.reload(pymodule);
    % passing data back and forth
    x = rand([3 3]);
    x_np = py.numpy.array(x);
    y_np = pymodule.dummy_py_method(x_np);
    y = double(y_np);
    disp(y - x);
Create the following before_first_run.m:
setenv('PYTHONUNBUFFERED','1');
setenv('path',['C:\Users\username\Anaconda3\envs\test_py36\Library\bin;'...
    getenv('path')]);
pe = pyenv('Version','C:\users\username\Anaconda3\envs\test_py36\pythonw.exe',...
    'ExecutionMode','InProcess'...
    );
% add "demo1.py" to path
py_file_path = 'W:\tests\Matlab\python_demos\call_pycode\pycode';
if count(py.sys.path,py_file_path) == 0
    insert(py.sys.path,int32(0),py_file_path);
end
Run the before_first_run.m first and run the run_py_code.m next.
Notes:
As already mentioned in this answer, one key point is to add the folder containing the necessary DLL files to %PATH% before starting Python. This can be achieved with setenv from within Matlab. Usually, Library\bin is what needs to be added.
It might be a good idea to try a clean, officially supported CPython distribution (e.g. CPython 3.6.8). Only install numpy (python -m pip install numpy). In my experience, the setenv step is not necessary in this case.
For me, OutOfProcess mode proved to be buggy. Thus, I'd recommend explicitly setting InProcess mode (for versions before R2019b, the OutOfProcess option is not present, nor is pyenv).
Do not concatenate the two .m files above into one - the py.importlib statements seem to be pre-executed and thus conflict with pyenv.

Smartsheet Python SDK Copy Workspace Fails

I am trying to copy a workspace to get around the 100 object limit.
Here's my code:
def rg_copy_workspace(workspace_id, new_ws_name, api_token, debug=False):
    import smartsheet
    smartsheet = smartsheet.Smartsheet(api_token)
    smartsheet.errors_as_exceptions(True)
    new_workspace = smartsheet.Workspaces.copy_workspace(
        workspace_id,
        smartsheet.models.ContainerDestination({
            'new_name': new_ws_name
        })
    )
just like the example in the Python SDK.
I am testing on a workspace with a small number of objects (I started with only one Sheet).
I'm getting an error on the folder_obj. I have tried it with and without a folder, and with a folder both with and without contents.
rg_copy_workspace(workspace_id, new_ws_name)
Traceback (most recent call last):
File "", line 1, in
rg_copy_workspace(workspace_id, new_ws_name)
File "", line 15, in rg_copy_workspace
'new_name': new_ws_name
File "(path-deleted)\workspaces.py", line 80, in copy_workspace
folder_obj = Folder({
File "(path-deleted)\smartsheet.py", line 210, in request
"""
File "(path-deleted)\smartsheet.py", line 278, in request_with_retry
if 200 <= response.status_code <= 299:
File "(path-deleted)\smartsheet.py", line 244, in _request
native = res.native(expected)
UnexpectedRequestError: (, None)
What am I doing wrong? I don't know how the code makes it to line 80 of workspaces.py.
I updated to the latest version of the SDK this morning (after receiving the error).
Craig
Reputation won't let me comment.
Your code seemed to execute fine for me on the updated 1.3 SDK.
The traceback locations look to line up with sources from roughly a year ago, but linecache is pulling from the new source to build the traceback (smartsheet.py line 210 is actually inside a comment, so it's definitely not right). I'm not sure what all the situations are that could account for this, but I'd guess there are stale compiled bytecode (.pyc) files somewhere.
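If stale bytecode is the culprit, a hedged cleanup sketch like the following (run with the same interpreter you use for your script) removes any leftover .pyc files under the installed smartsheet package so tracebacks line up with the new source again:
# Remove stale compiled bytecode under the installed smartsheet package.
import os
import smartsheet

pkg_dir = os.path.dirname(smartsheet.__file__)
for root, _dirs, files in os.walk(pkg_dir):
    for name in files:
        if name.endswith(".pyc"):
            os.remove(os.path.join(root, name))
            print("removed", os.path.join(root, name))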
Can you share a DEBUG level log near the relevant failure so that I can see what the API request looks like?
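For the DEBUG log, one way to capture it is something like the sketch below, assuming the SDK emits its request/response details through Python's standard logging module:
# Enable DEBUG logging before creating the client, then re-run the failing call.
import logging
import smartsheet

logging.basicConfig(filename="smartsheet_debug.log", level=logging.DEBUG)

ss = smartsheet.Smartsheet(api_token)  # api_token as in the question
ss.errors_as_exceptions(True)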

Make packages installed in a virtualenv visible to Sphinx

I am using Sphinx to document my software, and I am using a virtualenv for the installation. Some packages are only installed in the virtual environment, and Sphinx does not see them.
I have this code in my conf.py:
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
import os
import sys

p = os.path.abspath('..')
sys.path.insert(0, p)
if 'VIRTUAL_ENV' in os.environ:
    q = os.sep.join([os.environ['VIRTUAL_ENV'],
                     'lib', 'python2.7', 'site-packages'])
    sys.path.insert(0, q)
    p = p + ":" + q
    os.environ['PYTHONPATH'] = p
Yet when I run make html, I get warnings like this:
/home/mario/Local/github/Bauble/bauble.classic/doc/api.rst:358: WARNING: autodoc: failed to import class u'TagItemGUI' from module u'bauble.plugins.tag'; the following exception was raised:
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/sphinx/ext/autodoc.py", line 385, in import_object
__import__(self.modname)
File "/home/mario/Local/github/Bauble/bauble.classic/bauble/plugins/tag/__init__.py", line 30, in <module>
from sqlalchemy import *
ImportError: No module named sqlalchemy
My $VIRTUAL_ENV/lib/python2.7/site-packages contains SQLAlchemy-1.0.4-py2.7-linux-x86_64.egg.
This is definitely related to the question Sphinx autodoc dies on ImportError of third party package, but the description of the procedure I chose to follow is behind a broken link.
The problem is that the packages are not placed directly in the virtualenv's site-packages dir; you need to specify the full path to the egg to be able to import the package from there. I use the following hack:
import glob
import os
import sys

if 'VIRTUAL_ENV' in os.environ:
    site_packages_glob = os.sep.join([
        os.environ['VIRTUAL_ENV'],
        'lib', 'python2.7', 'site-packages', 'projectname-*py2.7.egg'])
    site_packages = glob.glob(site_packages_glob)[-1]
    sys.path.insert(0, site_packages)
where projectname is the name of the Python module I would like to import.
Note that this is error-prone, especially when you have multiple versions of the module, but so far it has worked for me.
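A slightly more general variant of the same idea (a sketch, not tied to one project name) puts the virtualenv's site-packages directory itself on sys.path, plus every egg directory found inside it, so egg-installed dependencies such as SQLAlchemy resolve as well:
# In conf.py: add the virtualenv's site-packages and all of its *.egg dirs to sys.path.
import glob
import os
import sys

if 'VIRTUAL_ENV' in os.environ:
    site_packages = os.sep.join([
        os.environ['VIRTUAL_ENV'], 'lib', 'python2.7', 'site-packages'])
    sys.path.insert(0, site_packages)
    for egg in glob.glob(os.path.join(site_packages, '*.egg')):
        sys.path.insert(0, egg)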

django.core.exceptions.AppRegistryNotReady: Models aren't loaded yet (tastypie)

Continuing my search for an answer on getting OAuth 2.0 to work on PythonAnywhere.
I am following this tutorial: http://ianalexandr.com/blog/building-a-true-oauth-20-api-with-django-and-tasty-pie.html
I'm using Django 1.6: https://www.pythonanywhere.com/wiki/VirtualEnvForNewerDjango
When I get to these lines of code:
from provider.oauth2.models import Client
# from django.contrib.auth.models import User
from django.contrib.auth import get_user_model
User = get_user_model()
u = User.objects.get(id=1)
c = Client(user=u, name="mysite client", client_type=1, url="http://pythonx00x.pythonanywhere.com")
c.save()
c.client_id
'd63f53a7a6cceba04db5'
c.client_secret
'afe899288b9ac4127d57f2f12ac5a49d839364dc'
it seems that I get an error at the line:
User = get_user_model()
which raises:
raise AppRegistryNotReady("Models aren't loaded yet.")
django.core.exceptions.AppRegistryNotReady: Models aren't loaded yet.
Here is the full stack trace:
Traceback (most recent call last):
File "addClient.py", line 9, in <module>
User = get_user_model()
File "/home/python2006/.virtualenvs/django16/local/lib/python2.7/site-packages/django/contrib/auth/__init__.py", line 136, in get_user_model
return django_apps.get_model(settings.AUTH_USER_MODEL)
File "/home/python2006/.virtualenvs/django16/local/lib/python2.7/site-packages/django/apps/registry.py", line 200, in get_model
self.check_models_ready()
File "/home/python2006/.virtualenvs/django16/local/lib/python2.7/site-packages/django/apps/registry.py", line 132, in check_models_ready
raise AppRegistryNotReady("Models aren't loaded yet.")
django.core.exceptions.AppRegistryNotReady: Models aren't loaded yet.
I can't seem to find out how to get the models to load, if I'm understanding the issue correctly.
I think you may not be using the version of Django that you think you are. AppRegistryNotReady was introduced in Django 1.7. I would guess that, if you pinned your Django version to 1.6, your code would work.
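If it turns out the environment really is running Django 1.7 or later, the usual fix for a standalone script like addClient.py is to initialize the app registry before touching any models. A minimal sketch (the settings module name mysite.settings is a placeholder for your own):
# addClient.py sketch for Django >= 1.7: set up Django before importing models.
import os
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'mysite.settings')  # placeholder name

import django
django.setup()  # populates the app registry; skipping this raises AppRegistryNotReady

from django.contrib.auth import get_user_model

User = get_user_model()
u = User.objects.get(id=1)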

Bottle web framework: any way to run a console/shell and get it to work with Werkzeug?

I searched but couldn't find an easy way to run a console or shell akin to Django's manage.py shell or Rails' rails console.
Since I just started using Bottle for an existing project, I just wanted to play around with the existing models and managers in a console. The closest I came up with was using ipdb's set_trace() and going from there, but that's not ideal by any means.
Also, I tried integrating Bottle with Werkzeug, but when I follow the instructions:
import bottle
app = bottle.Bottle()
werkzeug = bottle.ext.werkzeug.Plugin()
app.install(werkzeug)
I get the following traceback error:
Traceback (most recent call last):
File "mysite.py", line 62, in <module>
werkzeug = bottle.ext.werkzeug.Plugin()
AttributeError: 'module' object has no attribute 'werkzeug'
Try importing bottle.ext.werkzeug by adding this at the beginning of your source:
import bottle.ext.werkzeug
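Putting it together, a minimal sketch (assuming the bottle-werkzeug plugin package is installed in your environment) looks like this; bottle.ext redirects to separately installed plugin packages, but as noted above the submodule needs an explicit import before it is available as an attribute:
import bottle
import bottle.ext.werkzeug  # explicit import; bottle.ext does not auto-load plugins

app = bottle.Bottle()
werkzeug = bottle.ext.werkzeug.Plugin()  # now the attribute exists
app.install(werkzeug)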