google-cloud-storage library from nosetests using testbed - google-app-engine-python

I have google-cloud-storage pip installed into a lib directory and vendored in. It runs just fine locally during development of my Python App Engine app. However, when trying to run unit tests via nose and testbed, I'm getting "The 'google-cloud-core' distribution was not found and is required by the application". Here is the stack:
Traceback (most recent call last):
File "/Users/jason/dev/gain-data/venv/lib/python2.7/site-packages/nose/loader.py", line 418, in loadTestsFromName
addr.filename, addr.module)
File "/Users/jason/dev/gain-data/venv/lib/python2.7/site-packages/nose/importer.py", line 47, in importFromPath
return self.importFromDir(dir_path, fqname)
File "/Users/jason/dev/gain-data/venv/lib/python2.7/site-packages/nose/importer.py", line 94, in importFromDir
mod = load_module(part_fqname, fh, filename, desc)
File "/Users/jason/dev/gain-data/data/storage/__init__.py", line 4, in <module>
from google.cloud.storage import Blob, Client
File "/Users/jason/dev/gain-data/lib/google/cloud/storage/__init__.py", line 42, in <module>
from google.cloud.storage.batch import Batch
File "/Users/jason/dev/gain-data/lib/google/cloud/storage/batch.py", line 30, in <module>
from google.cloud.storage.connection import Connection
File "/Users/jason/dev/gain-data/lib/google/cloud/storage/connection.py", line 17, in <module>
from google.cloud import connection as base_connection
File "/Users/jason/dev/gain-data/lib/google/cloud/connection.py", line 31, in <module>
get_distribution('google-cloud-core').version)
File "/Users/jason/dev/gain-data/venv/lib/python2.7/site-packages/pkg_resources/__init__.py", line 557, in get_distribution
dist = get_provider(dist)
File "/Users/jason/dev/gain-data/venv/lib/python2.7/site-packages/pkg_resources/__init__.py", line 431, in get_provider
return working_set.find(moduleOrReq) or require(str(moduleOrReq))[0]
File "/Users/jason/dev/gain-data/venv/lib/python2.7/site-packages/pkg_resources/__init__.py", line 968, in require
needed = self.resolve(parse_requirements(requirements))
File "/Users/jason/dev/gain-data/venv/lib/python2.7/site-packages/pkg_resources/__init__.py", line 854, in resolve
raise DistributionNotFound(req, requirers)
DistributionNotFound: The 'google-cloud-core' distribution was not found and is required by the application
Any thoughts?

I had the same issue with google-cloud-translate; I was forced to also install the package "globally", i.e. pip install google-cloud-translate.

After struggling a lot with this same issue, I found out that the error was because the vendored pip lib wasn't on the PYTHONPATH before nosetests was called.
Try adding the vendored lib to the PYTHONPATH and then run the tests (plain shell shown here; in a Makefile recipe you would write $(HOME) and $$PYTHONPATH instead):
export PYTHONPATH="$HOME/Projects/myproject/pip_lib:$PYTHONPATH"
nosetests .
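If you would rather not depend on the shell environment, the same thing can be done from a small test-runner wrapper. This is only a sketch under a couple of assumptions: the file name run_tests.py is hypothetical, and the vendored packages are assumed to live in a lib/ directory created with pip install -t lib. The key detail is that pkg_resources builds its working set from sys.path when it is first imported, so the vendored directory also has to be registered there; otherwise get_distribution('google-cloud-core') keeps failing even though the plain import succeeds.
# run_tests.py (hypothetical) -- make the vendored lib/ visible to both normal
# imports and pkg_resources before nose starts collecting tests.
import os
import sys

import pkg_resources
import nose

# Assumption: vendored packages were installed with `pip install -t lib`.
LIB_DIR = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'lib')

if LIB_DIR not in sys.path:
    sys.path.insert(0, LIB_DIR)

# pkg_resources scanned sys.path when it was imported, so register the extra
# directory explicitly; this is what makes get_distribution() succeed.
pkg_resources.working_set.add_entry(LIB_DIR)

if __name__ == '__main__':
    nose.run(argv=['nosetests', '.'])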

Related

Running canopy-script.pyw gives traceback "No module named canopy.app.bootstrap"

I am trying to follow the instructions from the Enthought support website. To test what I have done, I run:
_python.exe canopy-script.pyw -d
Unfortunately, this gives the following traceback:
Traceback (most recent call last):
File "canopy-script.pyw", line 776, in <module>
File "canopy-script.pyw", line 336, in bootstrap
File "canopy-script.pyw", line 363, in chainload
File "canopy-script.pyw", line 762, in _chainload
File "C:\Users\User\AppData\Local\Enthought\Canopy\App\appdata\canopy-1.7.4.3348.win-x86_64\Canopy-script.pyw", line 7, in <module>
from canopy.app.bootstrap import main
ImportError: No module named canopy.app.bootstrap
After "Search" I can see many of bootstrap.py files on the disc.
What would be the solution for this problem?
The current version of Canopy is 2.1.9 (you are running v1.7.4).
To update Canopy, and for a link to the release notes, please see "Installing a new Canopy version".
After you update, see:
https://support.enthought.com/hc/en-us/articles/360021798791--UnresolvableRequirements-or-Conflicting-requirements-when-installing-or-updating-packages

How to use ibm_boto3 in Python

Does anyone have the same problem?
I want to store data in COS (IBM Cloud Object Storage), but I cannot use ibm_boto3 on my machine.
To be sure I was testing against a known-good sample, I used the code from this ibm-cos-sdk GitHub repository.
Installed packages (pip3 freeze):
backports.functools-lru-cache==1.5
botocore==1.12.28
docutils==0.14
futures==3.1.1
ibm-cos-sdk==2.3.2
ibm-cos-sdk-core==2.3.2
ibm-cos-sdk-s3transfer==2.3.2
-e git://github.com/boto/jmespath.git#1c9c35cf681b6605d8629e5ce8865221a4fd2a30#egg=jmespath
mock==1.3.0
nose==1.3.3
pbr==5.0.0
python-dateutil==2.7.3
s3transfer==0.1.13
six==1.11.0
urllib3==1.23
Here is my CLI result; as you can see, the import of ibm_boto3 fails:
python3 test.py
Traceback (most recent call last):
File "test.py", line 1, in <module>
import ibm_boto3
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/ibm_boto3/__init__.py", line 16, in <module>
from ibm_boto3.session import Session
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/ibm_boto3/session.py", line 27, in <module>
import ibm_botocore.session
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/ibm_botocore/session.py", line 37, in <module>
import ibm_botocore.credentials
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/ibm_botocore/credentials.py", line 36, in <module>
import requests
ModuleNotFoundError: No module named 'requests'
Yeah, it looks like requests somehow fell out of the requirements file in the latest release. The team is patching it and will release an update soon.
In the meantime, you can install the package in your environment with pip3 install requests, or add it to the requirements.txt file:
echo "requests==2.18.0" >> path/to/requirements.txt

Google Cloud SDK installation - gsutil error

I have installed the Google Cloud SDK but am having an issue running "gsutil". Here is the error I'm getting:
~/gcloud/google-cloud-sdk#> gsutil
Traceback (most recent call last):
File "/Users/gonyi/Desktop/gonyyi/gcloud/google-cloud-sdk/bin/bootstrapping/gsutil.py", line 13, in <module>
import bootstrapping
File "/Users/gonyi/Desktop/gonyyi/gcloud/google-cloud-sdk/bin/bootstrapping/bootstrapping.py", line 40, in <module>
from googlecloudsdk.core import execution_utils
File "/Users/gonyi/Desktop/gonyyi/gcloud/google-cloud-sdk/lib/googlecloudsdk/core/execution_utils.py", line 33, in <module>
from googlecloudsdk.core import log
File "/Users/gonyi/Desktop/gonyyi/gcloud/google-cloud-sdk/lib/googlecloudsdk/core/log.py", line 810, in <module>
_log_manager = _LogManager()
File "/Users/gonyi/Desktop/gonyyi/gcloud/google-cloud-sdk/lib/googlecloudsdk/core/log.py", line 526, in __init__
self._file_formatter = _LogFileFormatter()
File "/Users/gonyi/Desktop/gonyyi/gcloud/google-cloud-sdk/lib/googlecloudsdk/core/log.py", line 315, in __init__
super(_LogFileFormatter, self).__init__(fmt=_LogFileFormatter.FORMAT)
TypeError: must be type, not classobj
~/gcloud/google-cloud-sdk#>
I have Python 2.7 installed. I also tried installing the SDK with "brew cask", as well as downloading it directly from the Google Cloud site and installing it, but no luck either.
I googled this error, but it seems like I'm the only one getting it.
The "gcloud" command works just fine; it's only "gsutil" that isn't working.
Thank you
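For what it's worth, "must be type, not classobj" means super() was handed an old-style class, so the logging.Formatter that gsutil ends up importing is not the new-style one shipped with a stock Python 2.7. A small hedged diagnostic (run it with the same interpreter the SDK uses, e.g. the one pointed to by CLOUDSDK_PYTHON if you have that set) can show which interpreter and which logging module are actually being loaded:
# which_logging.py -- print which interpreter and logging module are in use.
# In a stock Python 2.7, logging.Formatter is a new-style class; if the last
# line prints 'classobj', an older or shadowed logging module is being loaded.
import logging
import sys

print(sys.executable)
print(sys.version)
print(logging.__file__)
print(type(logging.Formatter))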

Issue running psycopg2 inside AWS Lambda Function

I'm getting the following error when trying to run psycopg2 in an AWS Lambda function:
/var/task/functions/../vendored/psycopg2/_psycopg.so: ELF file's phentsize not the expected size: ImportError
Traceback (most recent call last):
File "/var/task/functions/refresh_mv.py", line 64, in execute
session = SessionFactoryGraphQL.get_session(app=item['app'])
File "/var/task/lib/session_factory.py", line 22, in get_session
engine = create_engine(conn_string, poolclass=NullPool)
File "/var/task/functions/../vendored/sqlalchemy/engine/__init__.py", line 387, in create_engine
return strategy.create(*args, **kwargs)
File "/var/task/functions/../vendored/sqlalchemy/engine/strategies.py", line 80, in create
dbapi = dialect_cls.dbapi(**dbapi_args)
File "/var/task/functions/../vendored/sqlalchemy/dialects/postgresql/psycopg2.py", line 554, in dbapi
import psycopg2
File "/var/task/functions/../vendored/psycopg2/__init__.py", line 50, in <module>
from psycopg2._psycopg import ( # noqa
ImportError: /var/task/functions/../vendored/psycopg2/_psycopg.so: ELF file's phentsize not the expected size
The weird thing is that everything had been working fine for more than five months, and it suddenly stopped working yesterday. None of the libraries have been updated.
I tried building it from scratch, as in https://github.com/jkehler/awslambda-psycopg2, but I'm still getting the same error.
Can someone help me with it?
The problem is in the latest version of the Serverless Framework. I assume that you are using Serverless to deploy your Lambda function. Remove the deployed service and install a known-good version globally:
serverless remove
npm install -g serverless@1.20.2
This should work.
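If you want to verify that packaging (rather than the library itself) is what broke, one hedged check is to compare the _psycopg.so inside the deployment artifact with the original file; the "phentsize not the expected size" message points at a truncated or mangled ELF file, which fits the Serverless downgrade fix above. The paths and the zip member name below are assumptions you will need to adjust to your project layout.
# check_so.py -- compare the _psycopg.so that was packaged into the deployment
# zip with the original file, to see whether packaging corrupted it.
import hashlib
import zipfile

ORIGINAL = 'vendored/psycopg2/_psycopg.so'   # path in the repo (assumption)
ARTIFACT = '.serverless/my-service.zip'      # serverless artifact (assumption)
MEMBER = 'vendored/psycopg2/_psycopg.so'     # path inside the zip (assumption)

def digest(data):
    return hashlib.sha256(data).hexdigest()

with open(ORIGINAL, 'rb') as f:
    print('original:', digest(f.read()))

with zipfile.ZipFile(ARTIFACT) as z:
    print('packaged:', digest(z.read(MEMBER)))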

Openerp 7.0 server not getting started

I have downloaded a fresh copy of the 7.0 branch of Odoo from GitHub, changed all the necessary things like the user, password, and port, and tried to start the server, but I'm getting this error:
2015-03-12 06:47:50,735 7260 CRITICAL ? openerp.modules.module: Couldn't load module web
2015-03-12 06:47:50,736 7260 CRITICAL ? openerp.modules.module: cannot import name models
2015-03-12 06:47:50,736 7260 ERROR ? openerp.service: Failed to load server-wide module `web`.
The `web` module is provided by the addons found in the `openerp-web` project.
Maybe you forgot to add those addons in your addons_path configuration.
Traceback (most recent call last):
File "/home/viraj/workspace/v7_development/odoo_v7/openerp/service/__init__.py", line 60, in load_server_wide_modules
openerp.modules.module.load_openerp_module(m)
File "/home/viraj/workspace/v7_development/odoo_v7/openerp/modules/module.py", line 415, in load_openerp_module
getattr(sys.modules['openerp.addons.' + module_name], info['post_load'])()
File "/home/viraj/workspace/v7_development/odoo_v7/addons/web/http.py", line 628, in wsgi_postload
openerp.wsgi.register_wsgi_handler(Root())
File "/home/viraj/workspace/v7_development/odoo_v7/addons/web/http.py", line 517, in __init__
self.load_addons()
File "/home/viraj/workspace/v7_development/odoo_v7/addons/web/http.py", line 580, in load_addons
m = __import__('openerp.addons.' + module)
File "/home/viraj/workspace/v7_development/odoo_v7/openerp/modules/module.py", line 133, in load_module
mod = imp.load_module('openerp.addons.' + module_part, f, path, descr)
File "/home/viraj/workspace/v7_development/emipro_addons/quotation_split/__init__.py", line 1, in <module>
import py
File "/home/viraj/workspace/v7_development/emipro_addons/quotation_split/py/__init__.py", line 1, in <module>
import sale_order
File "/home/viraj/workspace/v7_development/emipro_addons/quotation_split/py/sale_order.py", line 1, in <module>
from openerp import models, fields, api, _
ImportError: cannot import name models
If anyone knows about this, please help me.
Thanks in advance.
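One thing worth noting from the traceback itself: the failing import is in the custom addon quotation_split, not in the web module. from openerp import models, fields, api, _ is the new-style API that only exists from Odoo 8.0 onwards; openerp.models and openerp.api are not available in 7.0, so the addon either has to be ported to the old API or removed from the 7.0 addons_path. As a rough sketch of what the old-style (7.0) declaration looks like (class and field names here are made up for illustration, not taken from the real addon):
# sale_order.py -- hypothetical 7.0-style version; OpenERP 7 uses the old
# openerp.osv API instead of openerp.models / openerp.api.
from openerp.osv import osv, fields
from openerp.tools.translate import _

class sale_order(osv.osv):
    _inherit = 'sale.order'

    _columns = {
        # Placeholder field, only to show the old-style column declaration.
        'x_split_note': fields.char(_('Split note'), size=128),
    }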