Database script encountered "AWKDBE018E Cannot access required JDBC Driver folder" in Workload Scheduler - scheduler

I created a database script step that accesses the SQL Database service in the Workload Scheduler service. When I run the process, the step encounters the error below.
Error message:
AWKDBE018E Cannot access required JDBC Driver folder
Message information:
http://www-01.ibm.com/support/knowledgecenter/SSGSPN_9.2.0/com.ibm.tivoli.itws.doc_9.2/common/src_ms/awsmsawkdbe.htm?lang=en
AWKDBE018E Cannot access required JDBC Driver folder
Explanation
The job was not able to access a JDBC Driver folder, you might not
have enough permissions.
System action
The operation is not performed.
Operator response
Verify that you have enough permissions.
This message seems to ask me to grant the proper authority to the job user, but there is no property to specify the job user of the Workload Automation Agent. I am using a Workload Automation Agent provisioned automatically by Bluemix.
Could you tell me which parameters are needed?
Database script step information
JDBC driver class path info
I checked the path in the log of the following "ls -lR" command step.

It seems to be a problem with the agent. I tried to replicate the same job type, but it fails with the same error message (even when using different approaches for the JDBC driver path).
If you are using the Workload Automation Agent that is created for you, then you could open a support ticket to have the Workload team look at that agent.
Edit after receiving support from the service team:
In the jar classpath field for a predefined Workload Scheduler process, you have to put only the path to the directory containing the jar files, without the name of the jar file to use.
So, according to the current Workload Scheduler documentation, you have to use the following value:
/home/wauser/utils
This way the database script works fine.
(screenshot added)
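As a reference sketch (the element name is taken from the jsdldatabase examples further down in this thread; in the UI this is simply the jar classpath field), the resulting job definition ends up with only the directory, not a jar file:
<jsdldatabase:driverPath>/home/wauser/utils</jsdldatabase:driverPath>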

It looks like it is having issues resolving the location of the JDBC class path for DB2. Can you please double-check the class path location for the DB2 driver?

Even though this is old, I wanted to make some quick checks.
This is tested on a 9.5 FP1 dynamic agent, part of the container delivery. The path values are the standard values for the container.
Try 1 - full path - SUCCESS
<jsdldatabase:driverPath>/opt/wa/TWS/jdbcdrivers/db2/</jsdldatabase:driverPath>
= Status Message: Success
= Exit Status : 0
Try 2 - relative path - FAIL
<jsdldatabase:driverPath>./jdbcdrivers/db2/</jsdldatabase:driverPath>
Job status : FAIL
===============================================================
AWKDBE018E Cannot access required JDBC Driver folder
===============================================================
Try 3 - variable in path - FAIL
<jsdldatabase:driverPath>${UNISONHOME}/jdbcdrivers/db2/</jsdldatabase:driverPath>
===============================================================
AWKDBE018E Cannot access required JDBC Driver folder
===============================================================
Try 4 - variable in path - FAIL
<jsdldatabase:driverPath>$UNISONHOME/jdbcdrivers/db2/</jsdldatabase:driverPath>
===============================================================
AWKDBE018E Cannot access required JDBC Driver folder
===============================================================
So, in short, you need an absolute path in that parameter.
BUT you can set the path in a configuration file that is global to the agent.
Try 5 - path in agent config file - SUCCESS
Inside the IWSDATA home, in wadata/JavaExt/cfg/DatabaseJobExecutor.properties, write the following line:
jdbcDriversPath=/opt/wa/TWS/jdbcdrivers
then remove the driver XML element from the job, so that there is no line
<jsdldatabase:driverPath>/opt/wa/TWS/jdbcdrivers/db2/</jsdldatabase:driverPath>
===============================================================
= Exit Status : 0
Note that in this case the jdbcdrivers/db2 part is not needed; the agent will search the subdirectories.
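As a quick sanity check (a sketch, assuming the standard container paths used above), you can verify from a shell or an "ls" job step that the agent user can actually read the driver folder, since AWKDBE018E is reported when the folder is not accessible:
ls -ld /opt/wa/TWS/jdbcdrivers /opt/wa/TWS/jdbcdrivers/db2
ls -l /opt/wa/TWS/jdbcdrivers/db2/*.jar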

Related

Failure/timeout invoking Lambda locally with SAM

I'm trying to get a local environment to run/debug Python Lambdas with VSCode (Windows). I'm using the provided HelloWorld example to get the hang of this, but I'm not able to invoke it.
Steps used to set up SAM and invoke the Lambda:
I have Docker installed and running
I have installed the SAM CLI
My AWS credentials are in place and working
I have no connectivity issues and I'm able to connect to AWS normally
I create the SAM application (HelloWorld) with all the files and resources; I didn't change anything.
I run "sam build" and it finishes successfully.
I run "sam local invoke" and it fails with a timeout. I increased the timeout to 10 s and it still times out. The HelloWorld Lambda code only prints and does nothing else, so I'm guessing the code isn't the problem, but something else related to the container or the SAM environment itself.
C:\xxxxxxx\lambda-python3.8>sam build
Your template contains a resource with logical ID "ServerlessRestApi", which is a reserved logical ID in AWS SAM. It could result in unexpected behaviors and is not recommended.
Building codeuri: C:\xxxxxxx\lambda-python3.8\hello_world runtime: python3.8 metadata: {} architecture: x86_64 functions: ['HelloWorldFunction']
Running PythonPipBuilder:ResolveDependencies
Running PythonPipBuilder:CopySource

Build Succeeded

Built Artifacts : .aws-sam\build
Built Template : .aws-sam\build\template.yaml

C:\xxxxxxx\lambda-python3.8>sam local invoke
Invoking app.lambda_handler (python3.8)
Skip pulling image and use local one: public.ecr.aws/sam/emulation-python3.8:rapid-1.51.0-x86_64.
Mounting C:\xxxxxxx\lambda-python3.8\.aws-sam\build\HelloWorldFunction as /var/task:ro,delegated inside runtime container
Function 'HelloWorldFunction' timed out after 10 seconds
No response from invoke container for HelloWorldFunction
Any hints on what's missing here?
Thanks.
Usually, a Lambda function times out because of some resource dependency. Are you using any external resource, maybe a DB connection or a REST API call?
Put more prints in lambda_handler (your function handler) before calling any resource; then you will know where exactly it is waiting. Also increase the timeout to 1 minute or more, because most external resource calls over HTTPS have 30-second timeouts.
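A minimal sketch of what that could look like, assuming the handler is the app.lambda_handler from the SAM HelloWorld template:
import json
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)

def lambda_handler(event, context):
    # Log as early as possible so a timeout can be narrowed down to a later call.
    logger.info("handler entered, event=%s", event)

    # ...log before and after every external call (DB, REST API, ...) made here...

    logger.info("handler finished, returning response")
    return {"statusCode": 200, "body": json.dumps({"message": "hello world"})}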
The log suggests that either the container wasn't started, or SAM couldn't connect to it.
Sometimes the hostname resolution on Windows can be affected by hosts file or system settings.
Try running the invoke command as follows (this will make the container ports bind to all interfaces):
sam local invoke --container-host-interface 0.0.0.0
...additionally try setting the container-host parameter (set to localhost by default):
sam local invoke --container-host-interface 0.0.0.0 --container-host host.docker.internal
The next piece of the puzzle is incorporating these settings into VSCode. This can be done in two places:
Create samconfig.toml in the root directory of the project with the following contents. This allows running sam local invoke from the terminal without having to add the command line argument:
version=0.1
[default.local_invoke.parameters]
container_host_interface = "0.0.0.0"
Update the launch configuration as follows to enable VSCode debugging:
...
"sam": {
"localArguments": ["--container-host-interface","0.0.0.0"]
}
...
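For context, here is a sketch of where that fragment sits in a full AWS Toolkit launch configuration (the configuration name and the template path are assumptions based on the HelloWorld project layout):
{
    "type": "aws-sam",
    "request": "direct-invoke",
    "name": "HelloWorldFunction (local)",
    "invokeTarget": {
        "target": "template",
        "templatePath": "${workspaceFolder}/template.yaml",
        "logicalId": "HelloWorldFunction"
    },
    "sam": {
        "localArguments": ["--container-host-interface", "0.0.0.0"]
    }
}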

td-agent does not validate google cloud service account credentials

I am trying to configure fluentd output with td-agent and the fluent-google-cloud plugin. The plugin and all dependencies are loaded, but fluentd is not outputting to Google Cloud Logging, and the td-agent log states error="Unable to read the credential file specified by GOOGLE_APPLICATION_CREDENTIALS: file /home/$(whoami)/.config/gcloud/service_account_credentials.json does not exist".
However, when I go to the file path, the file does exist, and the $GOOGLE_APPLICATION_CREDENTIALS variable is set to the file path as well. What should I do to fix this?
On the assumption that the error and you are both correct, I suspect (!) that you're using your user account (== whoami) and finding /home/$(whoami)/.config/gcloud, while the agent is running (under systemctl?) as root and not finding the credentials file there (perhaps it is looking under /root/.config/gcloud).
It would be helpful if you included more details about what you've done so that we can better understand the issue.
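A quick way to test that suspicion (a sketch; the td-agent service name and the paths to check are assumptions based on the error message):
# Which account is the agent actually running under?
systemctl status td-agent | grep -i 'Main PID'
ps -o user=,args= -p <main_pid>

# Can that account see the file named in the error, and does the agent's own home have one?
ls -l /home/<your-user>/.config/gcloud/service_account_credentials.json
sudo ls -l /root/.config/gcloud/ 2>/dev/null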

How to read a local csv file using Azure Data Factory and a self-hosted runtime?

I have a Windows Server VM with the ADF Integration Runtime installed running under a local account called deploy. This account is a member of the local admins group. The server is not domain-joined.
I created a new linked service (File System) and pointed it to a csv file on the root of the C drive as a test. When I test the connection, I get Connection failed.
Error occurred when trying to access the file in Folder 'C:\etr.csv', File filter: ''. The directory name is invalid. Activity ID: 1b892702-7cc3-48d5-83c7-c680d6d15afd.
Any ideas on a fix?
The linked service needs to point to a folder on the target machine. In your screenshot, change C:\etr.csv to C:\ and then define a new dataset that uses the linked service and selects etr.csv.
The dataset represents the structure of the data within the linked data store, and the linked service defines the connection to the data source. So the linked service should point to the folder instead of the file: it should be C:\ instead of C:\etr.csv.
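As a sketch of that split (the property names follow the File System connector's JSON definitions; the linked service, dataset, and integration runtime names here are made up for illustration):
Linked service - points at the folder, via the self-hosted integration runtime:
{
    "name": "LocalFileSystem",
    "properties": {
        "type": "FileServer",
        "typeProperties": {
            "host": "C:\\",
            "userId": "deploy",
            "password": { "type": "SecureString", "value": "<password>" }
        },
        "connectVia": { "referenceName": "SelfHostedIR", "type": "IntegrationRuntimeReference" }
    }
}
Dataset - selects the individual file inside that folder:
{
    "name": "EtrCsv",
    "properties": {
        "type": "DelimitedText",
        "linkedServiceName": { "referenceName": "LocalFileSystem", "type": "LinkedServiceReference" },
        "typeProperties": {
            "location": { "type": "FileServerLocation", "fileName": "etr.csv" },
            "columnDelimiter": ",",
            "firstRowAsHeader": true
        }
    }
}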

Azure batch Application package not getting copied to Working Directory of Task

I have created an Azure Batch pool with a Linux machine and specified an Application Package for the pool.
My command line is:
command='python $AZ_BATCH_APP_PACKAGE_scriptv1_1/tasks/XXX/get_XXXXX_data.py',
python3: can't open file '$AZ_BATCH_APP_PACKAGE_scriptv1_1/tasks/XXX/get_XXXXX_data.py':
[Errno 2] No such file or directory
When I connect to the node and look at the working directory, none of the Application Package files are present there.
How do I make sure that the files from the Application Package are available in the working directory, or how can I invoke/execute files under the Application Package from the command line?
Make sure that your async operations have proper awaits in place before you start using the package in your code.
Also, please share your design / pseudo-code scenario and how you are approaching it as a design.
Further to add:
It seems like this one is a pool-level package.
The error suggests that the application environment variable is either used incorrectly or that there is some other user-level issue. Please check out the link below, especially the section where the use of the environment variable is mentioned.
This seems like a user-level issue, because if there is an error while downloading the package resource it will be visible to you via the exception handler, or at the tool level if you are using Batch Explorer / Batch Labs, or through code-level exception handling.
https://learn.microsoft.com/en-us/azure/batch/batch-application-packages
Reason / Rationale:
If the pool-level or task application has an error, an error list will come back; if there was an error in the application package, it will be returned as a UserError or an AppPackageError, which will be visible in the exception handler of the code.
Key point: you can always RDP into your node and check the package availability; information here: https://learn.microsoft.com/en-us/azure/batch/batch-api-basics#connecting-to-compute-nodes
I once created a small sample to help people out, so that resource might help you check out the usage.
Hope the rest helps.
On Linux, the application package with version string is formatted as:
AZ_BATCH_APP_PACKAGE_{0}_{1}
On Windows it is formatted as:
AZ_BATCH_APP_PACKAGE_APPLICATIONID#version
where {0} is the application name and {1} is the version.
$AZ_BATCH_APP_PACKAGE_scriptv1_1 will take you to the root folder where the application was unzipped.
Does this "exact" path exist in that location?
tasks/XXX/get_XXXXX_data.py
You can see more information here:
https://learn.microsoft.com/en-us/azure/batch/batch-application-packages
Edit: Just saw this question: "or can I invoke/execute files under Application Package from command line"
Yes, you can invoke and execute files from the application package directory with the environment variable above.
If you type env on the node, you will see the environment variables that have been set.
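For example (a sketch; the variable name assumes the application id scriptv1 and version 1 from the question, as in the command line above):
env | grep AZ_BATCH_APP_PACKAGE
ls "$AZ_BATCH_APP_PACKAGE_scriptv1_1/tasks/XXX"
python3 "$AZ_BATCH_APP_PACKAGE_scriptv1_1/tasks/XXX/get_XXXXX_data.py"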

WSO2 API MANAGER clustering Worker-Manager

This is regarding WSO2 API Manager worker cluster configuration with an external Postgres DB. I have used two databases, i.e. wso2_carbon for registry and user management and wso2_am for storing the APIs. The respective XMLs have been configured, and the Postgres scripts have been run to create the database tables. When wso2server.sh is run, my log console shows clustering enabled and the members of the domain. However, on the https://:, when I try to create APIs, it throws an error in the design phase itself.
ERROR - add:jag org.wso2.carbon.apimgt.api.APIManagementException: Error while checking whether context exists
[2016-12-13 04:32:37,737] ERROR - ApiMgtDAO Error while locating API: admin-hello-v.1.2.3 from the database
java.sql.SQLException: org.postgres.Driver cannot be found by jdbc-pool_7.0.34.wso2v2
As per the error message, the driver class name you have given is org.postgres.Driver, which is not correct. It should be org.postgresql.Driver. Double-check the master-datasources.xml config.
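For instance, the relevant part of the datasource definition in master-datasources.xml would look roughly like this (a sketch: WSO2AM_DB is the default API Manager datasource name, and the URL, username, and password are placeholders; the key line is driverClassName):
<datasource>
    <name>WSO2AM_DB</name>
    <definition type="RDBMS">
        <configuration>
            <url>jdbc:postgresql://localhost:5432/wso2_am</url>
            <username>wso2carbon</username>
            <password>wso2carbon</password>
            <driverClassName>org.postgresql.Driver</driverClassName>
        </configuration>
    </definition>
</datasource>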