cannot import LanguageTranslatorV3 in AWS EC2 - ibm-cloud

I am trying to use the IBM Language Translator on AWS EC2. However, I cannot import LanguageTranslatorV3 on the EC2 instance, although it works on my laptop. The error is shown below. Can anyone help me solve this problem? Thank you!
from watson_developer_cloud import LanguageTranslatorV3
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
<ipython-input-62-0d0e9b329a15> in <module>()
----> 1 from watson_developer_cloud import LanguageTranslatorV3
ImportError: cannot import name 'LanguageTranslatorV3'

It looks like you are missing a dependency on watson-developer-cloud in your application's requirements.txt file.
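Another common cause is that an old copy of the package is already installed on the server: LanguageTranslatorV3 only exists in later releases of watson-developer-cloud, so an outdated version raises exactly this ImportError. A minimal sketch (using only the standard library, with a hypothetical helper name) to check what the EC2 interpreter actually sees:

```python
import importlib.util

def check_package(name):
    """Report whether a package is importable and where it resolves from.
    An old pre-installed copy of watson-developer-cloud (from before
    LanguageTranslatorV3 was added) causes exactly this ImportError."""
    spec = importlib.util.find_spec(name)
    if spec is None:
        return f"{name} is not installed; run: pip install {name.replace('_', '-')}"
    return f"{name} resolves to {spec.origin}"

print(check_package("watson_developer_cloud"))
```

If the package is present but old, `pip install --upgrade watson-developer-cloud` on the EC2 instance should bring in a version that provides LanguageTranslatorV3.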

Related

Was anyone able to import an Azure storage blob using Pulumi? If so, please let me know the command.

I am trying to import the Azure storage blob state into Pulumi using the Pulumi CLI.
I tried the command below:
pulumi import --yes azure-native:storage:Blob testblob
It threw the following error:
error: Preview failed: "resourceGroupName" not found in resource state
Please let me know if anyone has been able to successfully import an Azure storage blob resource into Pulumi.
thanks,
kumar
Expected result: the import succeeds.
Actual result: the import failed.
If you look at the docs for the Blob resource, there is an import section (this section exists for all resources).
The actual command you'll need is:
pulumi import azure-native:storage:Blob myresource1 /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Storage/storageAccounts/{accountName}/blobServices/default/containers/{containerName}/blobs/{blobName}
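In other words, the resource name alone is not enough; Pulumi needs the blob's full Azure resource ID so it can locate the resource group, storage account, and container. A sketch with the placeholders filled in (the subscription ID and all names below are made up for illustration):

```shell
# Hypothetical values: substitute your own subscription ID, resource
# group, storage account, container, and blob name.
pulumi import azure-native:storage:Blob testblob \
  /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/my-rg/providers/Microsoft.Storage/storageAccounts/mystorage/blobServices/default/containers/mycontainer/blobs/myblob.txt
```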

PyMongo null bytes error on import

I'm just trying to import the PyMongo package using Python3 and get the following error:
from pymongo import MongoClient
File "C:\Anaconda3\lib\site-packages\pymongo\__init__.py", line 83, in <module>
ValueError: source code string cannot contain null bytes
I've tried updating, reinstalling (with pip), cloning from GitHub, etc., with no joy.
Any suggestions gratefully received.
UPDATE: It must be a local config issue, since pymongo works in a virtualenv.

Why can't I import 'pandas_udf' in a Jupyter notebook?

I run the following code in Jupyter notebook, but get ImportError. Note that 'udf' can be imported in Jupyter.
from pyspark.sql.functions import pandas_udf
ImportError Traceback (most recent call last)
in <module>()
----> 1 from pyspark.sql.functions import pandas_udf
ImportError: cannot import name 'pandas_udf'
Anyone knows how to fix it? Thank you very much!
It looks like you started Jupyter Notebook by itself, rather than starting PySpark with Jupyter Notebook, which is done with the following command:
PYSPARK_DRIVER_PYTHON=jupyter PYSPARK_DRIVER_PYTHON_OPTS="notebook" pyspark
If your Jupyter Notebook server process is running on another machine, you may want to use this command to make it available on all IP addresses of your server.
(NOTE: This could be a potential security issue if your server is on a public or untrusted network.)
PYSPARK_DRIVER_PYTHON=jupyter PYSPARK_DRIVER_PYTHON_OPTS="notebook --ip=0.0.0.0 " pyspark
I will revise my answer if the problem still persists after you start Jupyter Notebook like that.
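One more thing worth checking (an assumption on my part, since your versions aren't shown): pandas_udf was only added in PySpark 2.3, so a kernel that sees an older pyspark, or no pyspark at all, raises the same ImportError. A quick standard-library check you can run in the notebook:

```python
import importlib.util

# Confirm which pyspark -- if any -- this notebook kernel actually sees;
# pandas_udf only exists in PySpark 2.3 and later.
spec = importlib.util.find_spec("pyspark")
if spec is None:
    print("pyspark is not on this kernel's path; start the notebook "
          "through pyspark as shown above")
else:
    import pyspark
    print("pyspark", pyspark.__version__, "from", spec.origin)
```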

Connection to pymongo

I am trying to connect to MongoDB on a Mac using PyMongo. I am getting the following error:
>>> from pymongo import MongoClient
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
from pymongo import MongoClient
ImportError: cannot import name 'MongoClient'
I have tried Connection as well, but it gives the same error. Any help?
Steps for troubleshooting:
First of all, verify if your environment is activated and you are in the correct environment.
If it is active and you're in the correct environment, verify that you have installed pymongo.
If it's not installed in your environment, install it using pip install pymongo in the environment you want to work in.
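The steps above can be sketched as a quick check run from the same interpreter that fails, which also catches the common case of having installed pymongo into a different environment than the one Python is actually running from:

```python
import subprocess
import sys

# Ask the *active* interpreter's pip whether pymongo is installed.
# Using sys.executable ensures we query the environment that raised
# the ImportError, not whichever pip happens to be on PATH.
result = subprocess.run(
    [sys.executable, "-m", "pip", "show", "pymongo"],
    capture_output=True, text=True,
)
if result.returncode != 0:
    print("pymongo is not installed in this environment: pip install pymongo")
else:
    print(result.stdout.splitlines()[0])  # first line of the package info
```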

How to use read_gbq or other bq in IPython to access datasets hosted in BigQuery

I am using an IPython notebook to read the Google BigQuery public natality dataset.
I have done the installation for the Google API client:
easy_install --upgrade google-api-python-client
However, it still does not detect the installed API.
Does anyone have an IPython notebook to share on accessing the public dataset and loading it into a dataframe?
import pandas as pd
projectid = "xxxx"
data_frame = pd.read_gbq('SELECT * FROM xxxx', project_id = projectid)
303 if not _GOOGLE_API_CLIENT_INSTALLED:
--> 304 raise ImportError('Could not import Google API Client.')
305
306 if not _GOOGLE_FLAGS_INSTALLED:
ImportError: Could not import Google API Client
I have shared the iPython Notebook used at
http://nbviewer.ipython.org/urls/dl.dropbox.com/s/d77u2xarscagw0b/BigQuery_Trial8.ipynb?dl=0
Additional info:
I am running on a server with a docker instance used for the iPython server.
I have run the curl https://sdk.cloud.google.com | bash installation on the linux server
I have tried to run some of the shared notebooks
nbviewer.ipython.org/gist/fhoffa/6459195
or nbviewer.ipython.org/gist/fhoffa/6472099
However I also get
ImportError: No module named bq
I suspect it is a simple case of missing dependencies.
Any clues or help are welcome.
As I said here: https://stackoverflow.com/a/31708375/2533394
I solved the problem with this:
pip install --force-reinstall uritemplate.py
Make sure your Pandas is version 0.17 or higher:
pip install -U pandas
You can check with:
import pandas as pd
pd.__version__