Conflicting versions of celery and kombu while installing airflow - celery

I am installing Airflow version 1.9.0 and getting the following error:
Traceback (most recent call last):
....
bcpmrstbssapp1 airflow: from kombu.entity import Exchange, Queue
airflow: File "/opt/xxx/lib/airflow/kombu/entity.py", line 9, in <module>
airflow: from .serialization import prepare_accept_content
airflow: File "/opt/xxx/lib/airflow/kombu/serialization.py", line 456, in <module>
airflow: for ep, args in entrypoints('kombu.serializers'): # pragma: no cover
airflow: File "/opt/xxx/lib/airflow/kombu/utils/compat.py", line 89, in entrypoints
airflow: for ep in importlib_metadata.entry_points().get(namespace, [])
....
TypeError: can't intern subclass of string
I am using celery version 4.3.
Should I select a specific version of kombu?

Pin kombu to version 4.6.3:
pip install kombu==4.6.3
Jira: https://issues.apache.org/jira/browse/AIRFLOW-5240
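If you are installing Airflow's celery extra in the same step, one way to keep pip from pulling in an incompatible kombu is to pin the related packages together (versions taken from the question and the answer above; adjust to your environment):
pip install 'apache-airflow[celery]==1.9.0' 'celery==4.3.0' 'kombu==4.6.3'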

Related

AWS CloudWatch Logs agent throws an error

I'm setting up the awslogs agent on an EC2 instance. When I run the awslogs Python setup script, I get the message below.
Downloading the latest CloudWatch Logs agent bits ... ERROR: Failed to create virtualenv. Try manually installing with pip and adding it to the sudo user's PATH before running this script.
And awslogs-agent-setup.log shows the error below.
Environment: CentOS 6.10 and Python 2.6
Traceback (most recent call last):
File "/usr/bin/pip", line 7, in <module>
from pip._internal import main
File "/usr/lib/python2.6/site-packages/pip-19.0.3-py2.6.egg/pip/_internal/__init__.py", line 19, in <module>
from pip._vendor.urllib3.exceptions import DependencyWarning
File "/usr/lib/python2.6/site-packages/pip-19.0.3-py2.6.egg/pip/_vendor/urllib3/__init__.py", line 8, in <module>
from .connectionpool import (
File "/usr/lib/python2.6/site-packages/pip-19.0.3-py2.6.egg/pip/_vendor/urllib3/connectionpool.py", line 92
_blocking_errnos = {errno.EAGAIN, errno.EWOULDBLOCK}
^
SyntaxError: invalid syntax
/usr/bin/virtualenv
Traceback (most recent call last):
File "/usr/bin/virtualenv", line 7, in <module>
from virtualenv import main
File "/usr/lib/python2.6/site-packages/virtualenv.py", line 51, in <module>
print("ERROR: {}".format(sys.exc_info()[1]))
ValueError: zero length field name in format
Basically, this error is due to your Python version, 2.6. The set-literal syntax in pip 19's vendored urllib3 and the empty {} format fields used by virtualenv both require Python 2.7 or later, which is why you see the SyntaxError and ValueError above. Please update your Python from 2.6 to 2.7 or a 3.x release.
This should help.
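A minimal sketch of the upgrade path, assuming you have installed a Python 2.7 interpreter on the instance (the region value is only an example):
python --version
sudo python2.7 ./awslogs-agent-setup.py --region us-east-1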

How to use the ibm_boto3 in python

Has anyone had the same problem?
I want to store data in COS, but I cannot use ibm_boto3 on my machine.
To rule out my own code, I used the sample code from this ibm-cos-sdk GitHub repository.
Installed packages:
pip3 freeze
backports.functools-lru-cache==1.5
botocore==1.12.28
docutils==0.14
futures==3.1.1
ibm-cos-sdk==2.3.2
ibm-cos-sdk-core==2.3.2
ibm-cos-sdk-s3transfer==2.3.2
-e git://github.com/boto/jmespath.git#1c9c35cf681b6605d8629e5ce8865221a4fd2a30#egg=jmespath
mock==1.3.0
nose==1.3.3
pbr==5.0.0
python-dateutil==2.7.3
s3transfer==0.1.13
six==1.11.0
urllib3==1.23
Here is my CLI result; as you can see, the import of ibm_boto3 fails.
python3 test.py
Traceback (most recent call last):
File "test.py", line 1, in <module>
import ibm_boto3
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/ibm_boto3/__init__.py", line 16, in <module>
from ibm_boto3.session import Session
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/ibm_boto3/session.py", line 27, in <module>
import ibm_botocore.session
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/ibm_botocore/session.py", line 37, in <module>
import ibm_botocore.credentials
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/ibm_botocore/credentials.py", line 36, in <module>
import requests
ModuleNotFoundError: No module named 'requests'
Yeah, it looks like requests somehow fell out of the requirements file in the latest release. The team is patching it and will release an update soon.
In the meantime, you can install the package in your environment with pip3 install requests, or add it manually to the requirements.txt file:
echo "requests==2.18.0" >> path/to/requirements.txt

Docker-compose ps error

I am new to docker-compose and getting the following error when I type in docker-compose ps.
Traceback (most recent call last):
File "/usr/local/bin/docker-compose", line 7, in <module>
from compose.cli.main import main
File "/Library/Python/2.7/site-packages/compose/cli/main.py", line 20, in <module>
from ..bundle import get_image_digests
File "/Library/Python/2.7/site-packages/compose/bundle.py", line 14, in <module>
from .service import format_environment
File "/Library/Python/2.7/site-packages/compose/service.py", line 37, in <module>
from .parallel import parallel_execute
File "/Library/Python/2.7/site-packages/compose/parallel.py", line 10, in <module>
from six.moves import _thread as thread
ImportError: cannot import name _thread
docker-compose is written in Python. It seems that you are missing some Python packages. You can refer to the following page for how to fix the Python library issue:
Matplotlib issue on OS X ("ImportError: cannot import name _thread")
Alternatively, you can try installing docker-compose as a container; the container image ships with the environment docker-compose needs.
https://docs.docker.com/compose/install/#install-as-a-container
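Following that linked question, a common fix on macOS is to replace the stale copy of six bundled with the system Python (a sketch, not guaranteed for every setup):
sudo pip install --ignore-installed --upgrade six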

google-cloud-storage library from nosetests using testbed

I have google-cloud-storage pip installed into a lib directory and vendored in. It's running just fine locally during development of my python appengine app. However, when trying to run unit tests via nose and testbed I'm getting "The 'google-cloud-core' distribution was not found and is required by the application". Here is the stack:
Traceback (most recent call last):
File "/Users/jason/dev/gain-data/venv/lib/python2.7/site-packages/nose/loader.py", line 418, in loadTestsFromName
addr.filename, addr.module)
File "/Users/jason/dev/gain-data/venv/lib/python2.7/site-packages/nose/importer.py", line 47, in importFromPath
return self.importFromDir(dir_path, fqname)
File "/Users/jason/dev/gain-data/venv/lib/python2.7/site-packages/nose/importer.py", line 94, in importFromDir
mod = load_module(part_fqname, fh, filename, desc)
File "/Users/jason/dev/gain-data/data/storage/__init__.py", line 4, in <module>
from google.cloud.storage import Blob, Client
File "/Users/jason/dev/gain-data/lib/google/cloud/storage/__init__.py", line 42, in <module>
from google.cloud.storage.batch import Batch
File "/Users/jason/dev/gain-data/lib/google/cloud/storage/batch.py", line 30, in <module>
from google.cloud.storage.connection import Connection
File "/Users/jason/dev/gain-data/lib/google/cloud/storage/connection.py", line 17, in <module>
from google.cloud import connection as base_connection
File "/Users/jason/dev/gain-data/lib/google/cloud/connection.py", line 31, in <module>
get_distribution('google-cloud-core').version)
File "/Users/jason/dev/gain-data/venv/lib/python2.7/site-packages/pkg_resources/__init__.py", line 557, in get_distribution
dist = get_provider(dist)
File "/Users/jason/dev/gain-data/venv/lib/python2.7/site-packages/pkg_resources/__init__.py", line 431, in get_provider
return working_set.find(moduleOrReq) or require(str(moduleOrReq))[0]
File "/Users/jason/dev/gain-data/venv/lib/python2.7/site-packages/pkg_resources/__init__.py", line 968, in require
needed = self.resolve(parse_requirements(requirements))
File "/Users/jason/dev/gain-data/venv/lib/python2.7/site-packages/pkg_resources/__init__.py", line 854, in resolve
raise DistributionNotFound(req, requirers)
DistributionNotFound: The 'google-cloud-core' distribution was not found and is required by the application
Any thoughts?
I had the same issue with google-cloud-translate; I was forced to also install the package "globally", i.e. pip install google-cloud-translate.
After struggling a lot with this same issue, I found out that the error was because the vendored pip lib directory wasn't on the PYTHONPATH before calling nosetests.
Try adding the vendored lib directory to the PYTHONPATH and then run the tests:
export PYTHONPATH="$HOME/Projects/myproject/pip_lib:$PYTHONPATH"; \
nosetests .
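If you prefer to keep the path tweak inside the test code instead of the shell, a minimal sketch in Python (the lib directory name and the helper's location are assumptions; point it at wherever your packages are vendored):
# e.g. at the top of the test package's __init__.py
import os
import sys
# prepend the vendored lib directory so the google.cloud imports resolve during the test run
sys.path.insert(0, os.path.join(os.path.dirname(__file__), 'lib'))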

Trying to Install AWS CLI, stuck on a step

I'm trying to install the AWS CLI for the Mac command line. I guess I don't understand what I need to do: I downloaded the AWS bundle with wget in the terminal, unzipped it, and did everything, but when I need to configure my credentials, nothing comes up when I run aws configure.
Here are the Instructions:
http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-set-up.html
Here is what is output:
an$ aws configuration
Traceback (most recent call last):
File "/usr/local/bin/aws", line 15, in <module>
import awscli.clidriver
File "/usr/local/aws/lib/python2.7/site-packages/awscli/clidriver.py", line 31, in <module>
from awscli.help import ProviderHelpCommand
File "/usr/local/aws/lib/python2.7/site-packages/awscli/help.py", line 20, in <module>
from docutils.core import publish_string
File "/usr/local/aws/lib/python2.7/site-packages/docutils/core.py", line 20, in <module>
from docutils import frontend, io, utils, readers, writers
File "/usr/local/aws/lib/python2.7/site-packages/docutils/frontend.py", line 41, in <module>
import docutils.utils
File "/usr/local/aws/lib/python2.7/site-packages/docutils/utils/__init__.py", line 20, in <module>
import docutils.io
File "/usr/local/aws/lib/python2.7/site-packages/docutils/io.py", line 18, in <module>
from docutils.utils.error_reporting import locale_encoding, ErrorString, ErrorOutput
File "/usr/local/aws/lib/python2.7/site-packages/docutils/utils/error_reporting.py", line 47, in <module>
locale_encoding = locale.getlocale()[1] or locale.getdefaultlocale()[1]
File "/usr/local/aws/lib/python2.7/locale.py", line 511, in getdefaultlocale
return _parse_localename(localename)
File "/usr/local/aws/lib/python2.7/locale.py", line 443, in _parse_localename
raise ValueError, 'unknown locale: %s' % localename
ValueError: unknown locale: UTF-8
Any ideas?
Try adding the lines below to ~/.bash_profile:
export LC_ALL=en_US.UTF-8
export LANG=en_US.UTF-8
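Then reload the profile and retry the command (assuming bash is your login shell):
source ~/.bash_profile
aws configure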
Installing the AWS CLI on a Windows machine
I had a similar issue with Windows 10 (64-bit). Python 3.5 and Python 2.7 are installed on my PC. I was getting ImportError: No module named awscli.clidriver.
Then I added %USERPROFILE%\AppData\Roaming\Python\Python35\Scripts to the PATH environment variable and removed Python 2.7 from it. Now I can successfully use awscli.
I have created a step by step AWSCLI installation guide in this Github repository: https://github.com/arsho/installation/tree/master/awscli_installation.
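For reference, the same PATH change can be made from a command prompt (a sketch; setx rewrites the user PATH, so double-check the result afterwards):
setx PATH "%PATH%;%USERPROFILE%\AppData\Roaming\Python\Python35\Scripts"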
I had to install the EKS-supported version, and I ended up getting it to work by ignoring six:
$ pip3 install awscli --ignore-installed six
In my case nothing worked until I gave more permissions; I run the aws command as a non-root user:
chown amzadm.root /usr/bin/aws
chown amzadm.root -R /usr/lib/python2.6/site-packages/
chown amzadm.root -R /usr/lib/python2.6/site-packages/awscli/
I fixed this by adding a line to the 'aws' script just before the import (line 19). So now the file reads:
sys.path.append('/Users/<username>/.local/lib/aws/lib/python2.7/site-packages/')
import awscli.clidriver
This worked for me. In the ~/.bashrc file:
export AWS_DEFAULT_OUTPUT="json"