I am trying out the Watson Studio visual modeler for neural networks. While learning, I have tried a few different designs and published several training definitions.
If I navigate to Experiment Builder, I see a lot of definitions; some are old and no longer needed.
How can I delete old training definitions? (Ideally from the Watson Studio UI)
The Watson Machine Learning Python client doesn't support deleting training definitions; its API documentation shows which operations are supported. The WML team is working to add such delete functionality, though.
In the meantime, you can use WML's CLI tool to execute bx ml delete:
NAME:
   delete - Delete a model/deployment/training-run/training-definitions/experiments/experiment-runs

USAGE:
   bx ml delete models MODEL-ID
   bx ml delete deployments MODEL-ID DEPLOYMENT-ID
   bx ml delete training-runs TRAINING-RUN-ID
   bx ml delete training-definitions TRAINING-DEFINITION-ID
   bx ml delete experiments EXPERIMENT-ID
   bx ml delete experiment-runs EXPERIMENT-ID EXPERIMENT-RUN-ID
Use bx ml list to get details on the items that you wish to delete.
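For example, a session to remove a stale training definition might look like the following (the ID is a placeholder, and I'm assuming the list subcommand accepts the same resource names as delete):

$ bx ml list training-definitions
$ bx ml delete training-definitions TRAINING-DEFINITION-ID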
Actually, the Python client does support deleting training definitions.
You just call client.repository.delete(artifact_uid). The same method can be used to delete any item from the repository (model, training definition, or experiment). It is documented in the Python client docs:
delete(artifact_uid)
Delete model, definition or experiment from repository.
Parameters: artifact_uid ({str_type}) – stored model, definition, or experiment UID
A way you might use me is:
>>> client.repository.delete(artifact_uid)
A training_run is a completely different thing from a training_definition.
You can also remove it if needed:
delete(run_uid)
Delete training run.
Parameters: run_uid ({str_type}) – ID of trained model
A way you might use me is:
>>> client.training.delete(run_uid)
You can also remove the experiment_run if needed by calling:
delete(experiment_run_uid)
Delete experiment run.
Parameters: experiment_run_uid ({str_type}) – experiment run UID
A way you might use me is:
>>> client.experiments.delete(experiment_run_uid)
Please refer to the Python client docs for more details: http://wml-api-pyclient-dev.mybluemix.net/
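Putting the three calls together, a minimal sketch (wml_credentials and the three UID variables are placeholders for your own values; the methods are the ones documented above):

from watson_machine_learning_client import WatsonMachineLearningAPIClient

# Authenticate with the service credentials of your WML instance
client = WatsonMachineLearningAPIClient(wml_credentials)

# Each call removes one kind of artifact, as described above
client.repository.delete(training_definition_uid)  # model, definition, or experiment
client.training.delete(run_uid)                    # training run
client.experiments.delete(experiment_run_uid)      # experiment run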
I have a SOAR system that uses BoltDB to store its incidents.
I want to take a copy of that BoltDB over to the DEV environment and leverage its data without compromising PROD data.
I'm new to BoltDB; are there tools available to review / query a BoltDB database? Ultimately I'm looking to see whether I can script a solution to scramble certain values within the BoltDB.
I am wondering what approach to take when designing serverless functions, using the design of a regular server as a point of reference.
With a traditional server, one would focus on defining collections and then the CRUD operations that can run on each of them (HTTP verbs such as GET or POST).
For example, you would have a collection of users, and you could get all records via app.get('/users', ...), get a specific one via app.get('/users/{id}', ...), or create one via app.post('/users', ...).
How differently would you approach designing a serverless function? Specifically:
Does it make sense to differentiate between HTTP operations, or would you just use POST? I find it useful to have them defined on the client side, to decide whether I want to retry in case of an error (if the operation is idempotent, it is safe to retry, etc.), but it does not seem to matter in the back end.
Naming. I assume you would use something like getAllUsers(), whereas with a regular server you would define a collection of users and then just use GET to specify what you want to do with it.
Size of functions: what if you need to do a number of things in the back end in one step? Would you define a number of small functions, such as lookupUser() and endTrialForUser() (fired if the user returned by lookupUser() has been on trial longer than 7 days), and then run them one after another from the client (deciding whether the trial should be ended on the client, which seems quite unsafe), or would you just create a getUser() and handle all the logic there?
Routing. In serverless functions, we can't really do anything like .../users/${id}/accountData. How would you go about fetching nested data? Would you just return the complete JSON every time?
I have been looking for some comprehensive articles on the matter but no luck. Any suggestions?
This is a very broad question, so let me try answering it point by point.
Firstly, the approach you're describing is the Serverless API project approach. You can clone the sample project to get a better understanding of how to build REST APIs for performing CRUD operations. Start by installing the SAM CLI and then run the following commands.
$ sam init
Which template source would you like to use?
1 - AWS Quick Start Templates
2 - Custom Template Location
Choice: 1
Cloning from https://github.com/aws/aws-sam-cli-app-templates
Choose an AWS Quick Start application template
1 - Hello World Example
2 - Multi-step workflow
3 - Serverless API
4 - Scheduled task
5 - Standalone function
6 - Data processing
7 - Infrastructure event management
8 - Machine Learning
Template: 3
Which runtime would you like to use?
1 - dotnetcore3.1
2 - nodejs14.x
3 - nodejs12.x
4 - python3.9
5 - python3.8
Runtime: 2
Based on your selections, the only Package type available is Zip.
We will proceed to selecting the Package type as Zip.
Based on your selections, the only dependency manager available is npm.
We will proceed copying the template using npm.
Project name [sam-app]: sample-app
-----------------------
Generating application:
-----------------------
Name: sample-app
Runtime: nodejs14.x
Architectures: x86_64
Dependency Manager: npm
Application Template: quick-start-web
Output Directory: .
Next steps can be found in the README file at ./sample-app/README.md
Commands you can use next
=========================
[*] Create pipeline: cd sample-app && sam pipeline init --bootstrap
[*] Test Function in the Cloud: sam sync --stack-name {stack-name} --watch
Coming to your questions point by point:
Yes, you should differentiate your HTTP operations with their suitable HTTP verbs. This can be configured at the API Gateway and checked for in the Lambda code. Check the source code of the handlers and the template.yml file from the project you've just cloned with SAM.
// src/handlers/get-by-id.js
if (event.httpMethod !== 'GET') {
    throw new Error(`getMethod only accepts GET method, you tried: ${event.httpMethod}`);
}
# template.yml
Events:
  Api:
    Type: Api
    Properties:
      Path: /{id}
      Method: GET
The naming is entirely up to the developer. You can follow the same approach that you're using in your regular server project.
You can define the handler with the name getAllUsers or users and then set the path of that resource to GET /users in the AWS API Gateway (see the sketch below). You can choose whichever HTTP verbs you like. Check this tutorial out for a better understanding.
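For illustration, a SAM template snippet along those lines might look like the following (the function name, handler path, and route are hypothetical, not taken from the cloned sample):

# template.yml (hypothetical function mapping GET /users)
GetAllUsersFunction:
  Type: AWS::Serverless::Function
  Properties:
    Handler: src/handlers/get-all-users.getAllUsers
    Runtime: nodejs14.x
    Events:
      Api:
        Type: Api
        Properties:
          Path: /users
          Method: GET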
Again, this is up to you. You can create a single Lambda that handles all that logic, or create individual Lambdas that are triggered one after another by the client based on the response from the previous API. I would say create a single Lambda and return the cumulative response, to reduce the number of requests. But again, this depends entirely on the UI integration: if your screens demand separate API calls, then by all means create individual Lambdas.
This is not true. We can have dynamic routes specified in the API Gateway.
You can easily set wildcards in your routes by using {variableName} when defining the routes in API Gateway.
GET /users/{userId}
The userId will then be available in your Lambda function via event.pathParameters.
GET /users/{userId}?a=x
Similarly, you can pass query strings and access them via event.queryStringParameters in your code. Have a look at working with routes.
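A minimal proxy-integration handler sketch showing both (written in Python, one of the runtimes sam init offers; the route /users/{userId} follows the examples above):

import json

def lambda_handler(event, context):
    # Path parameter from a route configured as GET /users/{userId}
    user_id = event["pathParameters"]["userId"]
    # Query strings such as ?a=x land here (value is None if none were sent)
    query = event.get("queryStringParameters") or {}
    return {
        "statusCode": 200,
        "body": json.dumps({"userId": user_id, "query": query}),
    }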
A tutorial I would recommend:
Tutorial: Build a CRUD API with Lambda and DynamoDB
I am trying to restore an encrypted DB to a non-encrypted DB. I made changes by setting piDbEncOpts to SQL_ENCRYPT_DB_NO, but the restore still fails. Is there Db2 sample code where I can check how to set the "no encrypt" option in Db2? I am adding the code snippet below.
db2RestoreStruct->piDbEncOpts->encryptDb = SQL_ENCRYPT_DB_NO;
The C API named db2Restore will restore an encrypted image to an unencrypted database, when used correctly.
You can use a modified version of IBM's sample files (dbrestore.sqc and related files) to see how to do it.
Depending on your C compiler version and settings, you might get a lot of warnings from IBM's code, because IBM does not appear to maintain its sample code as the years pass. However, you do not need to run IBM's sample code; you can study it to understand how to fix your own C code.
If installed, the samples component must match your Db2 server's version and fixpack, and you must use the C include files that come with that version and fixpack to get the relevant definitions.
The modifications to IBM's sample code include:
When using the db2Restore API, ensure its first argument has a value compatible with your server's Db2 version and fixpack, so the required functionality is available. If you specify the wrong version number for the first argument (for example, a version of Db2 that did not support this functionality), the API will fail. For example, on my Db2-LUW v11.1.4.6, I used the predefined db2Version1113, like this:
db2Restore(db2Version1113, &restoreStruct, &sqlca);
When setting the restore iOptions field, enable the flag DB2RESTORE_NOENCRYPT. For example, in IBM's sample, include the additional flag:
restoreStruct.iOptions = DB2RESTORE_OFFLINE | DB2RESTORE_DB | DB2RESTORE_NODATALINK | DB2RESTORE_NOROLLFWD | DB2RESTORE_NOENCRYPT;
Ensure the restoredDbAlias differs from the alias name of the encrypted backup.
I tested with Db2 v11.1.4.6 (db2Version1113 in the API) with gcc 9.3.
I also tested with Db2 v11.5 (db2Version11500 in the API) with gcc 9.3.
I trained a model using the Watson Machine Learning service. The training process has completed, so I ran this command to deploy it:
bx ml store training-runs model-XXXXXXX
I get the output with the model ID:
Starting to store the training-run 'model-XXXXXX' ...
OK
Model store successful. Model-ID is '93sdsdsf05-3ea4-4d9e-a751-5bcfbsdsd3391'.
Then I use the following to deploy it:
bx ml deploy 93sdsdsf05-3ea4-4d9e-a751-5bcfbsdsd3391 "my-test-model"
The problem is that I'm getting an endless stream of messages saying:
Checking if content upload is complete ...
Checking if content upload is complete ...
Checking if content upload is complete ...
Checking if content upload is complete ...
Checking if content upload is complete ...
When I check the COS results bucket, the model size is ~25 MB, so it shouldn't take that long to deploy. Am I missing something here?
Deploying the same model using the Python client API:
from watson_machine_learning_client import WatsonMachineLearningAPIClient

client = WatsonMachineLearningAPIClient(wml_credentials)
deployment_details = client.deployments.create(model_id, "model_name")
This showed me very quickly that there was an error with the deployment. The strange thing is that the error doesn't surface when deploying with the command line interface (CLI).
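If the create call returns rather than raising, you can also pull the deployment record to inspect its status; a small sketch (method names as documented in the Python client docs linked earlier):

# Fetch the stored deployment record to inspect its status and any error details
deployment_uid = client.deployments.get_uid(deployment_details)
print(client.deployments.get_details(deployment_uid))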
When I use Data Factory to update Azure ML models as the documentation describes (https://learn.microsoft.com/en-us/azure/data-factory/v1/data-factory-azure-ml-update-resource-activity),
I run into one problem:
The blob reference: test/model.ilearner has an invalid or missing file extension. Supported file extensions for this output type are: ".csv, .tsv, .arff".'.
I searched for the problem and found this solution:
https://disqus.com/home/discussion/thewindowsazureproductsite/data_factory_create_predictive_pipelines_using_data_factory_and_machine_learning_microsoft_azure/ .
But my linked services for the outputs of the training pipeline and the update-resource pipeline are already different.
How can I solve this problem?