#ckan PasteScript error during CKAN migration from version 2.4 to 2.6 - upgrade

I'm upgrading from version 2.4 to version 2.6 using the documentation link. I'm getting the following error when executing the paster command:
paster db upgrade -c /etc/ckan/default/production.ini
Traceback (most recent call last):
File "/usr/lib/ckan/default/bin/paster", line 7, in
from paste.script.command import run
File "/usr/lib/ckan/default/local/lib/python2.7/site-packages/paste/script/command.py", line 54, in
dist = pkg_resources.get_distribution('PasteScript')
File "/usr/lib/ckan/default/local/lib/python2.7/site-packages/distribute-0.6.24-py2.7.egg/pkg_resources.py", line 330, in get_distribution
if isinstance(dist,Requirement): dist = get_provider(dist)
File "/usr/lib/ckan/default/local/lib/python2.7/site-packages/distribute-0.6.24-py2.7.egg/pkg_resources.py", line 209, in get_provider
return working_set.find(moduleOrReq) or require(str(moduleOrReq))[0]
File "/usr/lib/ckan/default/local/lib/python2.7/site-packages/distribute-0.6.24-py2.7.egg/pkg_resources.py", line 686, in require
needed = self.resolve(parse_requirements(requirements))
File "/usr/lib/ckan/default/local/lib/python2.7/site-packages/distribute-0.6.24-py2.7.egg/pkg_resources.py", line 584, in resolve
raise DistributionNotFound(req)
pkg_resources.DistributionNotFound: PasteScript
Any idea?

It looks like you didn't run the 'pip install' step (step 3) with an activated virtual environment. All of these commands should be run with the virtual environment activated (step 1), so don't forget to activate it again if you start a new terminal/shell.
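For example, a minimal sequence might look like this (the activate path comes from your traceback; the requirements.txt location is an assumption based on a default source checkout, so adjust it to match your setup):
# activate the CKAN virtualenv before running any pip or paster commands
. /usr/lib/ckan/default/bin/activate
# re-run the pip install step from the upgrade docs inside the activated virtualenv
pip install --upgrade -r /usr/lib/ckan/default/src/ckan/requirements.txt
# then retry the database upgrade
paster db upgrade -c /etc/ckan/default/production.ini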

Related

Running canopy-script.pyw gives traceback "No module named canopy.app.bootstrap"

I am trying to follow the instructions from the Enthought support website. To test what I have done, I run the line:
python.exe canopy-script.pyw -d
Unfortunately, this gives the following traceback:
Traceback (most recent call last):
File "canopy-script.pyw", line 776, in <module>
File "canopy-script.pyw", line 336, in bootstrap
File "canopy-script.pyw", line 363, in chainload
File "canopy-script.pyw", line 762, in _chainload
File "C:\Users\User\AppData\Local\Enthought\Canopy\App\appdata\canopy-1.7.4.3348.win-x86_64\Canopy-script.pyw", line 7, in <module>
from canopy.app.bootstrap import main
ImportError: No module named canopy.app.bootstrap
After "Search" I can see many of bootstrap.py files on the disc.
What would be the solution for this problem?
The current version of Canopy is 2.1.9. (You are running v 1.7.4).
To update Canopy, and for a link to release notes, please see "Installing a new Canopy version".
After you update, see:
https://support.enthought.com/hc/en-us/articles/360021798791--UnresolvableRequirements-or-Conflicting-requirements-when-installing-or-updating-packages

AWS CloudWatch Logs agent throws an error

I'm setting up the awslogs agent on an EC2 instance. When I run the awslogs Python script, I get the message below.
Downloading the latest CloudWatch Logs agent bits ... ERROR: Failed to create virtualenv. Try manually installing with pip and adding it to the sudo user's PATH before running this script.
And awslogs-agent-setup.log shows the error below.
Environment: CentOS 6.10 and Python 2.6
Traceback (most recent call last):
File "/usr/bin/pip", line 7, in <module>
from pip._internal import main
File "/usr/lib/python2.6/site-packages/pip-19.0.3-py2.6.egg/pip/_internal/__init__.py", line 19, in <module>
from pip._vendor.urllib3.exceptions import DependencyWarning
File "/usr/lib/python2.6/site-packages/pip-19.0.3-py2.6.egg/pip/_vendor/urllib3/__init__.py", line 8, in <module>
from .connectionpool import (
File "/usr/lib/python2.6/site-packages/pip-19.0.3-py2.6.egg/pip/_vendor/urllib3/connectionpool.py", line 92
_blocking_errnos = {errno.EAGAIN, errno.EWOULDBLOCK}
^
SyntaxError: invalid syntax
/usr/bin/virtualenv
Traceback (most recent call last):
File "/usr/bin/virtualenv", line 7, in <module>
from virtualenv import main
File "/usr/lib/python2.6/site-packages/virtualenv.py", line 51, in <module>
print("ERROR: {}".format(sys.exc_info()[1]))
ValueError: zero length field name in format
Basically, this error is due to your Python version, 2.6: the set-literal syntax used by recent pip and the "{}" format strings used by virtualenv both require Python 2.7 or newer. Could you please update your Python from 2.6 to 2.7 or a 3.x release and then re-run the setup script.
This should help.
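If you need to stay on CentOS 6, one possible approach is to install Python 2.7 from Software Collections and run the installer with it. This is only a sketch: the SCL package names, the --region flag, and the region value are assumptions about the usual awslogs-agent-setup.py invocation.
# install Python 2.7 alongside the system Python 2.6 (package names assume CentOS 6 SCL)
sudo yum install -y centos-release-scl
sudo yum install -y python27
# re-run the CloudWatch Logs agent installer under the 2.7 interpreter
# (the region value is illustrative; use your own)
sudo scl enable python27 -- python ./awslogs-agent-setup.py --region us-east-1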

Can't connect Python with Titan DB

Following the steps to configure the Titan server:
bin/titan.sh
Forking Cassandra...
Running `nodetool statusthrift`... OK (returned exit status 0 and printed string "running").
Forking Elasticsearch...
Connecting to Elasticsearch (127.0.0.1:9300)... OK (connected to 127.0.0.1:9300).
Forking Gremlin-Server...
Connecting to Gremlin-Server (127.0.0.1:8182)... OK (connected to 127.0.0.1:8182).
Run gremlin.sh to connect.
The server started perfectly, but when I connect with Python and then run the script, I get the error mentioned below:
Traceback (most recent call last):
File "/home/admin-12/Documents/bitbucket/ecodrone/ecodrone/GremlinConnector.py", line 28, in <module>
data = (execute_query("""g.V()"""))
File "/home/admin-12/Documents/bitbucket/ecodrone/ecodrone/GremlinConnector.py", line 22, in execute_query
results = future_results.result()
File "/usr/lib/python3.6/concurrent/futures/_base.py", line 432, in result
return self.__get_result()
File "/usr/lib/python3.6/concurrent/futures/_base.py", line 384, in __get_result
raise self._exception
File "/home/admin-12/.local/lib/python3.6/site-packages/gremlin_python/driver/resultset.py", line 81, in cb
f.result()
File "/usr/lib/python3.6/concurrent/futures/_base.py", line 425, in result
return self.__get_result()
File "/usr/lib/python3.6/concurrent/futures/_base.py", line 384, in __get_result
raise self._exception
File "/usr/lib/python3.6/concurrent/futures/thread.py", line 56, in run
result = self.fn(*self.args, **self.kwargs)
File "/home/admin-12/.local/lib/python3.6/site-packages/gremlin_python/driver/connection.py", line 77, in _receive
self._protocol.data_received(data, self._results)
File "/home/admin-12/.local/lib/python3.6/site-packages/gremlin_python/driver/protocol.py", line 71, in data_received
result_set = results_dict[request_id]
KeyError: None
The versions I am using:
titan - 1.0.0
gremlin-python - 3.3.2
apache-tinkerpop-gremlin-server - 3.3.1
Titan supports an extremely old version of TinkerPop, and I'm sure you'll find some incompatibility there if you try to use gremlin-python 3.3.2. As Titan is no longer supported, I suggest you upgrade to JanusGraph, the more current and maintained continuation of Titan.
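If you want a quick sanity check before migrating, you could at least align the Python driver with the Gremlin Server version you listed. This is only a version pin, not a fix for Titan's old TinkerPop support, and it assumes the server you are actually connecting to really is 3.3.1:
# reinstall the driver pinned to the server's TinkerPop version
pip uninstall -y gremlinpython
pip install gremlinpython==3.3.1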

Issue running psycopg2 inside AWS Lambda Function

I'm getting the following error when trying to run psycopg2 in an AWS Lambda:
/var/task/functions/../vendored/psycopg2/_psycopg.so: ELF file's phentsize not the expected size: ImportError
Traceback (most recent call last):
File "/var/task/functions/refresh_mv.py", line 64, in execute
session = SessionFactoryGraphQL.get_session(app=item['app'])
File "/var/task/lib/session_factory.py", line 22, in get_session
engine = create_engine(conn_string, poolclass=NullPool)
File "/var/task/functions/../vendored/sqlalchemy/engine/__init__.py", line 387, in create_engine
return strategy.create(*args, **kwargs)
File "/var/task/functions/../vendored/sqlalchemy/engine/strategies.py", line 80, in create
dbapi = dialect_cls.dbapi(**dbapi_args)
File "/var/task/functions/../vendored/sqlalchemy/dialects/postgresql/psycopg2.py", line 554, in dbapi
import psycopg2
File "/var/task/functions/../vendored/psycopg2/__init__.py", line 50, in <module>
from psycopg2._psycopg import ( # noqa
ImportError: /var/task/functions/../vendored/psycopg2/_psycopg.so: ELF file's phentsize not the expected size
The weird thing is: everything was working fine until yesterday (for more than 5 months), and it suddenly stopped working. None of the libraries have been updated.
I tried to build from scratch, as in https://github.com/jkehler/awslambda-psycopg2, but I'm still getting the same error.
Can someone help me with it?
The problem is in the latest version of the Serverless Framework. I assume that you are using serverless to deploy your Lambda function. Remove the deployed stack and reinstall a pinned serverless release:
serverless remove
npm install -g serverless@1.20.2
This should work.
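If you prefer not to touch the global install, an alternative (assuming your project has a package.json) is to pin serverless as a dev dependency and deploy with the local binary:
# pin serverless per-project instead of globally
npm install --save-dev serverless@1.20.2
# deploy using the locally installed version
./node_modules/.bin/serverless deploy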

google-cloud-storage library from nosetests using testbed

I have google-cloud-storage pip-installed into a lib directory and vendored in. It runs just fine locally during development of my Python App Engine app. However, when trying to run unit tests via nose and testbed, I'm getting "The 'google-cloud-core' distribution was not found and is required by the application". Here is the stack:
Traceback (most recent call last):
File "/Users/jason/dev/gain-data/venv/lib/python2.7/site-packages/nose/loader.py", line 418, in loadTestsFromName
addr.filename, addr.module)
File "/Users/jason/dev/gain-data/venv/lib/python2.7/site-packages/nose/importer.py", line 47, in importFromPath
return self.importFromDir(dir_path, fqname)
File "/Users/jason/dev/gain-data/venv/lib/python2.7/site-packages/nose/importer.py", line 94, in importFromDir
mod = load_module(part_fqname, fh, filename, desc)
File "/Users/jason/dev/gain-data/data/storage/__init__.py", line 4, in <module>
from google.cloud.storage import Blob, Client
File "/Users/jason/dev/gain-data/lib/google/cloud/storage/__init__.py", line 42, in <module>
from google.cloud.storage.batch import Batch
File "/Users/jason/dev/gain-data/lib/google/cloud/storage/batch.py", line 30, in <module>
from google.cloud.storage.connection import Connection
File "/Users/jason/dev/gain-data/lib/google/cloud/storage/connection.py", line 17, in <module>
from google.cloud import connection as base_connection
File "/Users/jason/dev/gain-data/lib/google/cloud/connection.py", line 31, in <module>
get_distribution('google-cloud-core').version)
File "/Users/jason/dev/gain-data/venv/lib/python2.7/site-packages/pkg_resources/__init__.py", line 557, in get_distribution
dist = get_provider(dist)
File "/Users/jason/dev/gain-data/venv/lib/python2.7/site-packages/pkg_resources/__init__.py", line 431, in get_provider
return working_set.find(moduleOrReq) or require(str(moduleOrReq))[0]
File "/Users/jason/dev/gain-data/venv/lib/python2.7/site-packages/pkg_resources/__init__.py", line 968, in require
needed = self.resolve(parse_requirements(requirements))
File "/Users/jason/dev/gain-data/venv/lib/python2.7/site-packages/pkg_resources/__init__.py", line 854, in resolve
raise DistributionNotFound(req, requirers)
DistributionNotFound: The 'google-cloud-core' distribution was not found and is required by the application
Any thoughts?
I had the same issue with google-cloud-translate; I was forced to also install the package "globally", i.e. pip install google-cloud-translate.
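In this question the missing distribution is google-cloud-core, so the equivalent step would be installing the storage client (which pulls in core) into the virtualenv that nose runs from. The package names below are inferred from the error above, not from your requirements:
# install into the same virtualenv that runs nosetests, not only the vendored lib/ directory
pip install google-cloud-storage google-cloud-core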
After struggling a lot with this same issue, I found out that the error occurred because the vendored pip lib wasn't on the PYTHONPATH before calling nosetests.
Try adding the vendored lib directory to the PYTHONPATH and then run the tests:
export PYTHONPATH="$HOME/Projects/myproject/pip_lib:$PYTHONPATH"
nosetests .