Watson Machine Learning deployment takes too much time - ibm-cloud

I trained a model using the Watson Machine Learning service. The training process completed, so I ran this command to store the trained model:
bx ml store training-runs model-XXXXXXX
I get output containing the model ID:
Starting to store the training-run 'model-XXXXXX' ...
OK
Model store successful. Model-ID is '93sdsdsf05-3ea4-4d9e-a751-5bcfbsdsd3391'.
Then I use the following to deploy it:
bx ml deploy 93sdsdsf05-3ea4-4d9e-a751-5bcfbsdsd3391 "my-test-model"
The problem is that I'm getting an endless message saying:
Checking if content upload is complete ...
Checking if content upload is complete ...
Checking if content upload is complete ...
Checking if content upload is complete ...
Checking if content upload is complete ...
When I check the COS results bucket, the model size is ~25 MB, so it shouldn't take that long to deploy. Am I missing something here?

Deploying the same model using the Python client API:
from watson_machine_learning_client import WatsonMachineLearningAPIClient
# wml_credentials is the credentials dict for your WML service instance
client = WatsonMachineLearningAPIClient(wml_credentials)
deployment_details = client.deployments.create(model_id, "model_name")
This quickly showed me that there was an error with the deployment. The strange thing is that the error doesn't surface when deploying with the command-line interface (CLI).
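If it helps to see what state a deployment is actually in, the same client can list deployments and fetch their details; a minimal sketch (assuming the wml_credentials and model from above; verify the calls against your client version):
from watson_machine_learning_client import WatsonMachineLearningAPIClient
client = WatsonMachineLearningAPIClient(wml_credentials)
client.deployments.list()  # shows each deployment with its status
details = client.deployments.get_details()  # full records, including any error message
print(details)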

Overriding JMeter property in Run Taurus task in Azure pipeline is not working

I am running JMeter from Taurus and I need an output kpi.jtl file with URL listing.
I have tried passing the parameters -o modules.jmeter.properties.save.saveservice.url='true' and
-o modules.jmeter.properties="{'jmeter.save.saveservice.url':'true'}". The pipeline runs successfully, but the kpi.jtl doesn't have the URL. Please help.
I have tried a few more options, like editing jmeter.properties via the pipeline (which broke the pipeline, leaving it expecting input from the user) and
user.properties (which was ineffective).
I am expecting a kpi.jtl file with all the possible fields logged, especially the URL.
I believe you're using the wrong property; you should pass this one instead:
modules.jmeter.csv-jtl-flags.url=true
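For example, as a bzt command-line override, or equivalently in the YAML config itself (test.yml stands in for your actual Taurus scenario file):
bzt test.yml -o modules.jmeter.csv-jtl-flags.url=true
or:
modules:
  jmeter:
    csv-jtl-flags:
      url: true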
More information: CSV file content configuration
However, be aware that having a .jtl file "with all possible logs" is a form of performance anti-pattern, as it creates massive disk I/O and may ruin your test. More information: 9 Easy Solutions for a JMeter Load Test “Out of Memory” Failure

Building a graph from OSM and GTFS data using OTP

I have a few questions regarding building a graph with OTP (OpenTripPlanner) containing GTFS and OSM data. I am using GTFS and OSM data for Berlin.
I read here that OTP version 2 does not support isochrone and surface analysis features. This is from November 2021. Has this changed since then?
Using the same GTFS and OSM data, I am able to build the graph with OTP v2 but not with OTP v1.5. I get the error shown below while building the graph with OTP v1.5. It seems a field is missing in the pathways.txt file. However, before changing any of the GTFS files, I would like to know if anyone has had this error and whether adding the field will resolve it.
Exception in thread "main" org.onebusaway.csv_entities.exceptions.CsvEntityIOException: io error: entityType=org.onebusaway.gtfs.model.Pathway path=pathways.txt lineNumber=2
Caused by: org.onebusaway.csv_entities.exceptions.MissingRequiredFieldException: missing required field: pathway_type
Command used to build the graph:
java -Xmx4G -jar otp-1.5.0-shaded.jar --build graphs/current --port 9090
I am also trying to start the OTP v1.5 server by loading an already built graph using the --load switch, but it throws the error Unknown option --load. How can I load an already built graph to start the OTP server in version 1.5?
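For reference, --load is an OTP 2.x option. In OTP 1.x, a previously built graph (the Graph.obj under graphs/current) is typically served with the --router and --server switches; a sketch matching the paths from the build command above:
java -Xmx4G -jar otp-1.5.0-shaded.jar --graphs graphs --router current --server --port 9090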

AEM-REX-001-008: Unable to apply the requested usage rights to the given document

We have developed two interactive XDPs which have some pre-population and data binding, and they are rendered as interactive PDFs. Whenever we deploy the XDPs in our ETE environment, everything works fine. We have developed a REST API which generates the PDF and binds values from the front end.
The problem is that whenever we deploy the XDPs in the QA environment and try to consume the same REST API to bind dynamic values and generate the same PDF documents, document generation fails. I checked the error logs of the AEM instance and I am getting the output below. Can somebody please help us out here, as we are not able to find the root cause of this failure specific to the QA environment?
09.07.2019 16:53:13.307 *ERROR* [10.52.160.35 [1562683992994] POST /content/AemFormsSamples/renderpdfform.html HTTP/1.1] com.adobe.fd.readerextensions.service.impl.ReaderExtensionsServiceImpl AEM-REX-001-008: Unable to apply the requested usage rights to the given document.
java.lang.NullPointerException: null
at com.adobe.xfa.form.FormModel.preSave(FormModel.java:7242)
at com.adobe.xfa.form.FormModel.preSave(FormModel.java:7159)
at com.adobe.xfa.form.FormModel.preSave(FormModel.java:7159)
at com.adobe.xfa.form.FormModel.preSave(FormModel.java:7159)
at com.adobe.xfa.form.FormModel.preSave(FormModel.java:7159)
at com.adobe.xfa.form.FormModel.preSave(FormModel.java:7159)

How to delete training definitions?

I am trying out the Watson Studio visual modeler for neural networks. While learning, I have tried a few different designs and published several training definitions.
If I navigate to the Experiment Builder, I see a lot of definitions, some of which are old and no longer needed.
How can I delete old training definitions? (Ideally from the Watson Studio UI.)
The Watson Machine Learning Python client doesn't support deleting training definitions. WML's Python client API docs show which operations are supported. The WML team is working to add such delete functionality, though.
In the meantime, you can use WML's CLI tool to execute bx ml delete:
NAME:
delete - Delete a model/deployment/training-run/training-definitions/experiments/experiment-runs
USAGE:
bx ml delete models MODEL-ID
bx ml delete deployments MODEL-ID DEPLOYMENT-ID
bx ml delete training-runs TRAINING-RUN-ID
bx ml delete training-definitions TRAINING-DEFINITION-ID
bx ml delete experiments EXPERIMENT-ID
bx ml delete experiment-runs EXPERIMENT-ID EXPERIMENT-RUN-ID
Use bx ml list to get details on the items that you wish to delete, for example:
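bx ml list training-definitions
(This assumes list takes the same resource subcommands as delete; check bx ml list --help on your version of the CLI.)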
Actually, the Python client does support deleting training definitions.
You just call client.repository.delete(artifact_uid). The same method can be used to delete any item from the repository (model, training definition, or experiment). It is documented in the Python client docs:
delete(artifact_uid)
Delete model, definition or experiment from repository.
Parameters: artifact_uid ({str_type}) – stored model, definition, or experiment UID
A way you might use me is:
>>> client.repository.delete(artifact_uid)
A training run is a completely different thing from a training definition.
You can also remove it if needed:
delete(run_uid)
Delete training run.
Parameters: run_uid ({str_type}) – ID of trained model
A way you might use me is:
>>> client.training.delete(run_uid)
You can also remove the experiment_run if needed by calling:
delete(experiment_run_uid)
Delete experiment run.
Parameters: experiment_run_uid ({str_type}) – experiment run UID
A way you might use me is:
>>> client.experiments.delete(experiment_run_uid)
Please refer to the Python client docs for more details: http://wml-api-pyclient-dev.mybluemix.net/
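Putting the pieces together, a minimal sketch of the cleanup flow (assuming wml_credentials for your service instance; the UIDs come from the list output):
from watson_machine_learning_client import WatsonMachineLearningAPIClient
client = WatsonMachineLearningAPIClient(wml_credentials)
client.repository.list()  # lists stored models, definitions, and experiments with their UIDs
client.repository.delete(training_definition_uid)  # delete an old training definition
client.training.delete(run_uid)  # training runs are tracked separately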

Google Vision API - StatusCode.RESOURCE_EXHAUSTED

I am new to the Google Vision API and I would like to run label detection on approx. 10 images by running the vision quickstart.py file. However, when I do this with only 3 images, it is successful. With more than 3 images I get the error message below. I know that I need to change something in my setup, but I do not know what I should change.
Here is my error message:
google.gax.errors.RetryError: GaxError(Exception occurred in retry method
that was not classified as transient, caused by <_Rendezvous of RPC that
terminated with (StatusCode.RESOURCE_EXHAUSTED, Insufficient tokens for
quota 'DefaultGroup' and limit 'USER-100s' of service
'vision.googleapis.com' for consumer 'project_number: XXX'.)>)
Does anybody know what I need to do?
Any help would be much appreciated
Cheers,
Andi
I ran into the same problem and fixed it with these steps:
Make sure you have the Google Cloud SDK properly installed: https://cloud.google.com/vision/docs/reference/libraries
Setup a Service Account in the Google Cloud backend: https://developers.google.com/identity/protocols/OAuth2ServiceAccount#creatinganaccount
Create a Service Account Key and download it as a JSON file to a local folder. You need to keep the key private.
Export the file path to the key file as the GOOGLE_APPLICATION_CREDENTIALS environment variable, or activate the service account directly: gcloud auth activate-service-account --key-file path/to/your/keyfile/here
Log out/in of the console.
Make sure the environment variable is properly set with printenv.
Try your py-script again...
Good luck...
Edit: in addition to steps 1-3 above, you can just do vision_client = vision.Client.from_service_account_json('/path/to/your/keyfile.json') in your script. No need for the env variable then.
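Note that RESOURCE_EXHAUSTED on the 'USER-100s' limit is a rate quota, so spacing out requests can also help. A minimal sketch using the legacy vision.Client API mentioned above (the file names and the one-second pause are illustrative assumptions):
import time
from google.cloud import vision

vision_client = vision.Client.from_service_account_json('/path/to/your/keyfile.json')
for path in ['image1.jpg', 'image2.jpg', 'image3.jpg']:  # your ~10 images
    with open(path, 'rb') as f:
        image = vision_client.image(content=f.read())
    labels = image.detect_labels(limit=5)  # request at most 5 labels per image
    for label in labels:
        print(path, label.description, label.score)
    time.sleep(1)  # stay under the per-100-seconds request quota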