IBM Language Translator returns 403 Forbidden upon identify() - ibm-cloud

I followed the official documentation to create a multilingual Watson assistant outlined here:
https://github.com/with-watson/multilingual-chatbot
However, after deploying the function on IBM Cloud and testing the deployed function via IBM Cloud CLI with the below command, I am getting an error (logs below):
bx wsk action invoke translator --result --param text "Hallo, ich habe eine Frage."
{
"error": "The action did not return a dictionary."
}
"2020-01-13T12:54:57.787506Z stderr: Traceback (most recent call last):",
"2020-01-13T12:54:57.787554Z stderr: File \"pythonrunner.py\", line 88, in run",
"2020-01-13T12:54:57.787560Z stderr: exec('fun = %s(param)' % self.mainFn, self.global_context)",
"2020-01-13T12:54:57.787564Z stderr: File \"<string>\", line 1, in <module>",
"2020-01-13T12:54:57.787568Z stderr: File \"__main__.py\", line 98, in main",
"2020-01-13T12:54:57.787571Z stderr: response = translator.identify( text )",
"2020-01-13T12:54:57.787575Z stderr: File \"/action/virtualenv/lib/python3.6/site-packages/watson_developer_cloud/language_translator_v3.py\", line 193, in identify",
"2020-01-13T12:54:57.787579Z stderr: accept_json=True)",
"2020-01-13T12:54:57.787583Z stderr: File \"/action/virtualenv/lib/python3.6/site-packages/watson_developer_cloud/watson_service.py\", line 587, in request",
"2020-01-13T12:54:57.787587Z stderr: info=error_info, httpResponse=response)",
"2020-01-13T12:54:57.787591Z stderr: watson_developer_cloud.watson_service.WatsonApiException: Error: Forbidden, Code: 403",
"2020-01-13T12:54:57.788Z stderr: The action did not initialize or run as expected. Log data might be missing."
It looks like the API key is recognized but not permitted to be used for this action; however, the same key does return the right values when used via cURL.
The code executed in main is the same as provided in the GitHub repository above; I did not make any changes.
Any ideas on how to fix this issue? Thanks!

The key string used by curl is a bearer token. The API key needed by the cloud function is probably one provided by Identity and Access Management (IAM).
In the https://cloud.ibm.com console GUI, click Manage > Access (IAM) at the top, then select IBM Cloud API keys on the left and create an API key. This creates an API key that represents you, just like your login name and credentials. This is the simplest way to get this to work, but it is not great for production.
For production, consider using a Service ID, probably in combination with an Access Group.
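As a minimal sketch of the fix (assuming the watson_developer_cloud SDK shown in the traceback; constructor parameter names vary slightly between SDK versions), the function should be given the IAM API key itself rather than a bearer token:
from watson_developer_cloud import LanguageTranslatorV3

# Hypothetical values: substitute your own IAM API key and service URL
translator = LanguageTranslatorV3(
    version='2018-05-01',
    iam_apikey='YOUR_IAM_API_KEY',  # from Manage > Access (IAM) > IBM Cloud API keys
    url='https://gateway.watsonplatform.net/language-translator/api')

# Depending on the SDK version this returns a dict or a DetailedResponse
response = translator.identify('Hallo, ich habe eine Frage.')
print(response)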

Here's what worked for me, with additional changes.
I ran the command below to update the packages mentioned in the environment.yml file:
conda update --all
The conda version on my machine is 4.8.1.
The cloud-functions/wsk/functions/fn plugin version is 1.0.36.

While creating the Language Translator instance, make sure to choose the right region.
It worked for me after I changed it.

Related

flutter_stripe example app has 18 errors all "Object is of type 'unknown'." for 'error' variables

I'm trying to run flutter_stripe's example app. I forked and cloned the GitHub repository to my laptop.
Starting the yarn server results in 18 errors, all starting with Object is of type 'unknown'. All involve a variable named error, e, or err, on lines 130, 301, 442, 450, 451, 455, 456, 464, 578, 586, 587, 591, 592, 595, 599, and 600. Then it says Command failed with exit code 2.
Is this a null safety issue? How do I fix it?
Your existing GitHub issue with the library maintainers is likely to be your best source of help; however, reading it I noticed you said:
In the last step, setting up server/.env, my Stripe account has pk_test and a pk_live Publishable and Secret Keys. My guess is that I should use the pk_test keys in server/.env.example. Let’s make this clear in the comment at the top of server/.env.example.
This seems to be a misunderstanding of your Stripe API keys. There are secret keys (sk_) for your server and publishable keys (pk_) for your client-side application as a matching pair, and there is one pair each for live mode and test mode. You need to use a matching secret and publishable key from your dashboard.
Additionally, when setting up secrets in environment files, you'll typically be creating a .env file in the server/repo root directory. I read the above as though you might be trying to set up your keys in the .env.example file, which I don't expect would work. You should check with the developer of the library/example about this if .env doesn't work.
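For illustration, a hypothetical server/.env might look like the sketch below; the variable names are assumptions, so check server/.env.example for the names the example app actually reads:
# Use a matching test-mode pair together (hypothetical variable names)
STRIPE_SECRET_KEY=sk_test_XXXXXXXXXXXXXXXX
STRIPE_PUBLISHABLE_KEY=pk_test_XXXXXXXXXXXXXXXX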

Unable to authenticate using google cloud service account key created by python API

The sample below demonstrates a failure to authenticate to a Google service account using the key created just a few lines above via the Python API.
I was not able to find any documentation on how these programmatically created keys can be used.
The keys created by clicking through the console UI are working just fine.
However, for our use case, we need to create the keys programmatically.
There is an unanswered issue on GitHub as well: https://github.com/googleapis/google-cloud-python/issues/7824
logger.info("Created new service account: {}".format(ret))
logger.info("Getting the new service account key")
request = iam.projects().serviceAccounts().keys().create(
    name=ret['name'],
    body={'privateKeyType': 'TYPE_GOOGLE_CREDENTIALS_FILE'})
key = request.execute()
>>> print json.dumps(key, indent=4)  # just to verify what we got
{
"keyOrigin": "GOOGLE_PROVIDED",
"name": "goodandvalidname",
"validBeforeTime": "2029-06-28T15:09:59Z",
"privateKeyData": "datadata",
"privateKeyType": "TYPE_GOOGLE_CREDENTIALS_FILE",
"keyAlgorithm": "KEY_ALG_RSA_2048",
"validAfterTime": "2019-07-01T15:09:59Z"
}
>>> credentials = google.oauth2.service_account.Credentials.from_service_account_info(key)
Traceback (most recent call last):
File "/home/user/.p2/pool/plugins/org.python.pydev.core_7.2.1.201904261721/pysrc/_pydevd_bundle/pydevd_exec.py", line 3, in Exec
exec exp in global_vars, local_vars
File "<console>", line 1, in <module>
File "/home/user/.local/lib/python2.7/site-packages/google/oauth2/service_account.py", line 193, in from_service_account_info
info, require=['client_email', 'token_uri'])
File "/home/user/.local/lib/python2.7/site-packages/google/auth/_service_account_info.py", line 51, in from_dict
'fields {}.'.format(', '.join(missing)))
ValueError: Service account info was not in the expected format, missing fields token_uri, client_email.
Any help appreciated.
Answering my own issue and probably helping others...
The 'key' we get from the Python APIs is NOT the 'json key' as obtained from gcloud. The dict we get from iam.projects().serviceAccounts().keys().create() contains the field privateKeyData, which itself contains the ENTIRE 'json key' one needs to authenticate to Google Cloud.
The data in this field is base64-encoded and needs to be decoded and then parsed as JSON. Below is a snippet from working code, demonstrating that the credentials can be loaded back from such a key:
import base64
import json
import google.oauth2.service_account

request = iam.projects().serviceAccounts().keys().create(
    name=ret['name'],
    body={'privateKeyType': 'TYPE_GOOGLE_CREDENTIALS_FILE'})
key = request.execute()
# privateKeyData holds the entire JSON key file, base64-encoded
key = base64.b64decode(key['privateKeyData'])  # decodestring is the deprecated alias
key = json.loads(key)
credentials = google.oauth2.service_account.Credentials.from_service_account_info(key)
I figured this out by stepping through gcloud's service account key creation, line by line, using the Python debugger. Hope this helps others.

Setting Dropbox as custom backup on WHM/cPanel error

I am trying to set Dropbox as a custom backup destination following this cPanel blog post:
https://blog.cpanel.com/cpanel-whm-custom-backup-transport-example-dropbox/
The connection is working, but the backup files are not being transferred to Dropbox. And when I press Validate on the custom backup destination, it gives the following error:
Error: Validation for transport “dropbox” failed: Could not list files in
destination: Executed /usr/local/bin/backup_transport_dropbox.pl ls /
remotehost remoteuser : 2018-08-26T15:54:21 [WebService::Dropbox] [ERROR]
https://api.dropboxapi.com/2/files/list_folder {"path":"/"} -> [400] Error in
call to API function "files/list_folder": request body: path: Specify the root
folder as an empty string rather than as "/". at
/usr/local/share/perl5/WebService/Dropbox.pm line 184.
I am new to the Dropbox API and have no knowledge of Perl, so I could not figure out what is discussed in the link below:
https://github.com/silexlabs/unifile/issues/77
The error message is correctly indicating that the Dropbox API expects the value "" when referring to the root alone. The code is instead sending the value "/". This looks like a bug in the code.
It looks like you've already opened an issue with the developer for this:
https://github.com/CpanelInc/backup-transport-dropbox/issues/3
They should update the code to use "" when referring to the root folder on Dropbox.
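To illustrate the convention the error message describes, here is a sketch using the official Dropbox Python SDK (not the Perl transport script itself):
import dropbox

# Hypothetical token; the point is the path convention below
dbx = dropbox.Dropbox('YOUR_ACCESS_TOKEN')
dbx.files_list_folder('')       # root folder: empty string, not '/'
dbx.files_list_folder('/path')  # non-root folders keep the leading slash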

gcloud deploy function error code 3

I'm doing the basic fulfillment and conversation setup from the api.ai tutorial to make a bot for Facebook Messenger, and when I try to deploy the function with the command:
gcloud beta functions deploy testBot --stage-bucket testbot-e9bc4.appspot.com --trigger-http
(where 'testBot' is the name of the project and 'testbot-e9bc4.appspot.com' is the bucket name, I thought...)
It returns the following error message:
ERROR: (gcloud.beta.functions.deploy) OperationError: code=3, message=Source code size exceeds the limit
I've searched but have not found any answer; I don't know where the error is.
This is the JS file that appears in the tutorial:
/**
 * HTTP Cloud Function.
 * @param {Object} req Cloud Function request context.
 * @param {Object} res Cloud Function response context.
 */
exports.helloHttp = function helloHttp (req, res) {
  var response = "This is a sample response from your webhook!"; // Default response from the webhook to show it's working
  res.setHeader('Content-Type', 'application/json'); // Requires application/json MIME type
  // "speech" is the spoken version of the response, "displayText" is the visual version
  res.send(JSON.stringify({ "speech": response, "displayText": response }));
};
Make sure you are in the correct directory, where your function's code resides, before executing the gcloud beta functions deploy testBot --stage-bucket testbot-e9bc4.appspot.com --trigger-http command.
I was working on an Express project. In my case, I mistakenly installed the @google/storage package in devDependencies instead of dependencies. I was unable to notice this because I tested the project in debug mode using Mocha, so in debug it was able to find the package in devDependencies; but on deploying the function, it looks for the package under dependencies in package.json and cannot find it there.
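For reference, a minimal package.json sketch (hypothetical names and versions) showing where a runtime package must live so the deployed function can find it:
{
  "name": "testbot",
  "main": "index.js",
  "dependencies": {
    "@google/storage": "^1.0.0"
  },
  "devDependencies": {
    "mocha": "^5.0.0"
  }
}
Anything the function imports at runtime belongs under dependencies; devDependencies are only installed for local development and testing.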
Open command prompt at the location where index.js was created and run the above gcloud command.

Error while configuring Google App Engine on Pydev in Eclipse

I am trying to configure Google App Engine on Eclipse and use it to run a Python application locally (on the local host).
For this I used the following tutorial as a guide:
http://www.mkyong.com/google-app-engine/google-app-engine-python-hello-world-example-using-eclipse/
I followed the steps properly, but when I try to use the configuration I get errors. The console output is:
Console Output:
C:\Program Files (x86)\Google\google_appengine\google\appengine\api\search\search.py:232: UserWarning: DocumentOperationResult._code is deprecated. Use OperationResult._code instead.
'Use OperationResult.%s instead.' % (name, name))
C:\Program Files (x86)\Google\google_appengine\google\appengine\api\search\search.py:232: UserWarning: DocumentOperationResult._CODES is deprecated. Use OperationResult._CODES instead.
'Use OperationResult.%s instead.' % (name, name))
WARNING 2012-06-20 14:53:01,451 rdbms_mysqldb.py:74] The rdbms API is not available because the MySQLdb library could not be loaded.
Traceback (most recent call last):
File "C:\Program Files (x86)\Google\google_appengine\dev_appserver.py", line 126, in
run_file(file, globals())
File "C:\Program Files (x86)\Google\google_appengine\dev_appserver.py", line 122, in run_file
execfile(script_path, globals_)
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\dev_appserver_main.py", line 694, in
sys.exit(main(sys.argv))
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\dev_appserver_main.py", line 582, in main
root_path, {}, default_partition=default_partition)
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\dev_appserver.py", line 3142, in LoadAppConfig
raise AppConfigNotFoundError
google.appengine.tools.dev_appserver.AppConfigNotFoundError
The configuration I am using is:
Windows 7 64-bit
Python 2.7
Eclipse Helios
What could be the possible mistake in configuring the GAE?
Additional info: when I try to use the project with GAE manually (i.e. by using the launcher), it works.
Update:
I experimented and discovered that I get these errors because the workspace and the Python installation folder are not in the same directory.
I got the hint from here:
File
"C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\dev_appserver.py"
line 582, in main
root_path, {}, default_partition=default_partition)
but when I made another workspace in the same partition, I got the console output below, and the local host is still not working.
output
C:\Program Files (x86)\Google\google_appengine\google\appengine\api\search\search.py:232: UserWarning: DocumentOperationResult._code is deprecated. Use OperationResult._code instead.
'Use OperationResult.%s instead.' % (name, name))
C:\Program Files (x86)\Google\google_appengine\google\appengine\api\search\search.py:232: UserWarning: DocumentOperationResult._CODES is deprecated. Use OperationResult._CODES instead.
'Use OperationResult.%s instead.' % (name, name))
WARNING 2012-06-20 17:20:56,719 rdbms_mysqldb.py:74] The rdbms API is not available because the MySQLdb library could not be loaded.
Runs a development application server for an application.
dev_appserver.py [options]
Application root must be the path to the application to run in this server.
Must contain a valid app.yaml or app.yml file.
Options:
--address=ADDRESS, -a ADDRESS Address to which this server should bind. (Default localhost)
--clear_datastore, -c Clear the Datastore on startup. (Default false)
--debug, -d Use debug logging. (Default false)
--help, -h View this helpful message.
--port=PORT, -p PORT Port for the server to run on. (Default 8080)
--allow_skipped_files Allow access to files matched by app.yaml's
skipped_files (default False)
--auth_domain Authorization domain that this app runs in.
(Default gmail.com)
--backends Run the dev_appserver with backends support
(multiprocess mode).
--blobstore_path=DIR Path to directory to use for storing Blobstore
file stub data.
--clear_prospective_search Clear the Prospective Search subscription index
(Default false).
--datastore_path=DS_FILE Path to file to use for storing Datastore file
stub data.
(Default c:\users\anukoo~1\appdata\local\temp\dev_appserver.datastore)
--debug_imports Enables debug logging for module imports, showing
search paths used for finding modules and any
errors encountered during the import process.
--default_partition Default partition to use in the APPLICATION_ID.
(Default dev)
--disable_static_caching Never allow the browser to cache static files.
(Default enable if expiration set in app.yaml)
--disable_task_running When supplied, tasks will not be automatically
run after submission and must be run manually
in the local admin console.
--enable_sendmail Enable sendmail when SMTP not configured.
(Default false)
--high_replication Use the high replication datastore consistency
model. (Default false).
--history_path=PATH Path to use for storing Datastore history.
(Default c:\users\anukoo~1\appdata\local\temp\dev_appserver.datastore.history)
--multiprocess_min_port When running in multiprocess mode, specifies the
lowest port value to use when choosing ports. If
set to 0, select random ports.
(Default 9000)
--mysql_host=HOSTNAME MySQL database host.
Used by the Cloud SQL (rdbms) stub.
(Default 'localhost')
--mysql_port=PORT MySQL port to connect to.
Used by the Cloud SQL (rdbms) stub.
(Default 3306)
--mysql_user=USER MySQL user to connect as.
Used by the Cloud SQL (rdbms) stub.
(Default )
--mysql_password=PASSWORD MySQL password to use.
Used by the Cloud SQL (rdbms) stub.
(Default '')
--mysql_socket=PATH MySQL Unix socket file path.
Used by the Cloud SQL (rdbms) stub.
(Default '')
--persist_logs Enables storage of all request and application
logs to enable later access. (Default false).
--require_indexes Disallows queries that require composite indexes
not defined in index.yaml.
--show_mail_body Log the body of emails in mail stub.
(Default false)
--skip_sdk_update_check Skip checking for SDK updates. If false, fall back
to opt_in setting specified in .appcfg_nag
(Default false)
--smtp_host=HOSTNAME SMTP host to send test mail to. Leaving this
unset will disable SMTP mail sending.
(Default '')
--smtp_port=PORT SMTP port to send test mail to.
(Default 25)
--smtp_user=USER SMTP user to connect as. Stub will only attempt
to login if this field is non-empty.
(Default '').
--smtp_password=PASSWORD Password for SMTP server.
(Default '')
--task_retry_seconds How long to wait in seconds before retrying a
task after it fails during execution.
(Default '30')
--use_sqlite Use the new, SQLite based datastore stub.
(Default false)
Invalid arguments
Seems like the arguments to dev_appserver.py are incorrect. Any ideas?
If you made the same mistake as me, then there is a very high chance that you have a space in the name of your directory.
The warnings about deprecated things in search can be ignored, as can the message about the rdbms API if you don't plan on using that.
AppConfigNotFoundError happens when there is no app.yaml in the directory passed to dev_appserver.py. If you followed those instructions, then your app.yaml would be in the 'src' directory, and the 'program arguments' in the build command would be ${project_loc}/src - is this the case? When you run from the command-line, and see it work, what command are you running, and from what location?
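For reference, a minimal app.yaml sketch for a Python 2.7 GAE app of that era (the application name and handler script are assumptions based on the tutorial's hello-world layout):
application: helloworld
version: 1
runtime: python27
api_version: 1
threadsafe: true

handlers:
- url: /.*
  script: helloworld.app
dev_appserver.py must be pointed at the directory that contains this file (here, the 'src' directory).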