Talend Studio: my job worked until it printed the error "routine signature not found"

I have created a job in Talend where data comes from an Access database and goes to a Postgres database. Everything was fine and the job ran normally, but at some point, when I launched or relaunched it, the following messages appeared:
[WARN ] 09:26:31 com.healthmarketscience.jackcess.Index- unsupported collating sort order SortOrder[1036(0)] for text index (Db=Donnees_irrig_v5.accdb;Table=SR_bd_carthage_MP;Index=0), making read-only
WARNING:routine signature not found for: PUBLIC.LEN(INTEGER)
What I found on the internet is about Java, but I can't translate it to Talend.
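The warning appears to come from the HSQLDB engine behind the UCanAccess driver that Talend uses to read Access files: somewhere a LEN() call receives an INTEGER argument, and no LEN(INTEGER) routine is registered, hence "routine signature not found". If the call originates in your own query, the usual workaround is to cast the numeric value to text first. A hedged SQL sketch, with a hypothetical column name (the table name is taken from the log above):

```sql
-- Hypothetical: num_code is an INTEGER column in the Access table.
-- This form triggers "routine signature not found for: PUBLIC.LEN(INTEGER)":
SELECT LEN(num_code) FROM SR_bd_carthage_MP;

-- Casting to text first gives LEN an argument type it has a signature for:
SELECT LEN(CAST(num_code AS VARCHAR(20))) FROM SR_bd_carthage_MP;
```

Note that both lines in the log are warnings, not errors; if the job actually stops, the component named in the stack trace that follows them is the place to look.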

Related

Synapse suddenly started having a problem with a hash-distribution column in a MERGE

I started getting this error on 06-25-2022 in my fact table flows. Before that there was no problem, and nothing had changed.
The Error is:
Operation on target Fact_XX failed: Operation on target Merge_XX failed: Execution fail against sql server. Sql error number: 100090. Error Message: Updating a distribution key column in a MERGE statement is not supported.
Sql error number: 100090. Error Message: Updating a distribution key column in a MERGE statement is not supported.
You got this error because updating a distribution key column through the MERGE command is currently not supported in Azure Synapse.
MERGE is currently in preview for Azure Synapse Analytics.
You can refer to the official documentation for more details:
https://learn.microsoft.com/en-us/sql/t-sql/statements/merge-transact-sql?view=azure-sqldw-latest&preserve-view=true
It clearly states that the MERGE command in Azure Synapse Analytics, which is presently in preview, may under certain conditions leave the target table in an inconsistent state, with rows placed in the wrong distribution, causing later queries to return wrong results in some cases.
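The restriction can be shown with a short T-SQL sketch; the table and column names below are hypothetical:

```sql
-- Hypothetical hash-distributed fact table; dist_key is the distribution column.
CREATE TABLE dbo.Fact_XX
( dist_key INT NOT NULL, amount DECIMAL(18,2) )
WITH ( DISTRIBUTION = HASH(dist_key) );

-- Fails with Sql error number 100090: the SET list touches the distribution key.
MERGE dbo.Fact_XX AS tgt
USING dbo.Stage_XX AS src ON tgt.dist_key = src.dist_key
WHEN MATCHED THEN
    UPDATE SET tgt.dist_key = src.dist_key, tgt.amount = src.amount
WHEN NOT MATCHED THEN
    INSERT (dist_key, amount) VALUES (src.dist_key, src.amount);

-- Workaround: drop the distribution key from the SET list.
MERGE dbo.Fact_XX AS tgt
USING dbo.Stage_XX AS src ON tgt.dist_key = src.dist_key
WHEN MATCHED THEN
    UPDATE SET tgt.amount = src.amount
WHEN NOT MATCHED THEN
    INSERT (dist_key, amount) VALUES (src.dist_key, src.amount);
```

Since the ON clause already matches rows on the distribution key, its value cannot change for a matched row, so removing it from the SET list loses nothing.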

AWS Glue job throwing NullPointerException when writing df

I am trying to write a job that reads data from S3 and writes to a BigQuery database (using the connector). I run the same script for other tables and it works correctly, but for one of the tables the write fails.
It works on the first run, but after the first load the incremental runs throw this NullPointerException. I have bookmarks enabled to fetch new data added in S3 and write it to the BigQuery database.
I already handle the new-data check: if there are files to process, the job proceeds; otherwise it aborts.
In the job logs the df prints and its count prints too; everything seems to be working, but as soon as the job runs the write-df command, it fails.
I am not sure what the cause is. I also tried to make the nullability of source and target the same, by setting the nullable property of the source to True to match the target, but it still fails.
I am unable to understand the NullPointerException that is thrown.
Error: Caused by: java.lang.NullPointerException at com.google.cloud.bigquery.connector.common.BigQueryClient.loadDataIntoTable(BigQueryClient.java:532) at com.google.cloud.spark.bigquery.BigQueryWriteHelper.loadDataToBigQuery(BigQueryWriteHelper.scala:87) at com.google.cloud.spark.bigquery.BigQueryWriteHelper.writeDataFrameToBigQuery(BigQueryWriteHelper.scala:66) ... 42 more
The BigQuery connector provided by AWS had a bug. I contacted the AWS team, and they suggested using the previous version of the connector.
Using the previous version of the connector resolved the issue.

AEM-REX-001-008: Unable to apply the requested usage rights to the given document

We have developed two interactive XDPs that do some pre-population and data binding and are rendered as interactive PDFs. Whenever we deploy the XDPs in our ETE environment, everything is perfect and works fine. We have developed a REST API that generates the PDF and binds values from the front end.
The problem is that whenever we deploy the XDPs in the QA environment and try to consume the same REST API to bind dynamic values and generate the same PDF documents, document generation fails. I checked the error logs of the AEM instance and I am getting the trace below. Can somebody please help us out here, as we are not able to find the root cause of this failure specific to the QA environment?
09.07.2019 16:53:13.307 *ERROR* [10.52.160.35 [1562683992994] POST /content/AemFormsSamples/renderpdfform.html HTTP/1.1] com.adobe.fd.readerextensions.service.impl.ReaderExtensionsServiceImpl AEM-REX-001-008: Unable to apply the requested usage rights to the given document.
java.lang.NullPointerException: null
at com.adobe.xfa.form.FormModel.preSave(FormModel.java:7242)
at com.adobe.xfa.form.FormModel.preSave(FormModel.java:7159)
at com.adobe.xfa.form.FormModel.preSave(FormModel.java:7159)
at com.adobe.xfa.form.FormModel.preSave(FormModel.java:7159)
at com.adobe.xfa.form.FormModel.preSave(FormModel.java:7159)
at com.adobe.xfa.form.FormModel.preSave(FormModel.java:7159)

Could not initialize class com.ibm.ws.ffdc.FFDCFilter. DSRA0010E: SQL State = 28P01, Error Code = 0

Can I get assistance with the error codes coming from Eclipse when I try to deploy an enterprise application on WebSphere? I followed Craig St. Jean's guide. I also face another problem with configuration, i.e. WebSphere data sources using PostgreSQL. I am using a Windows machine, 64-bit architecture. The error codes are the topic of this question. I hope this question can be seen as relevant, since not many solutions exist for the first issue concerning com.ibm.ws.ffdc.FFDCFilter, and if one doesn't overcome the first, one cannot press on and attempt to solve the second. Thanks.
WebSphere logs:
The test connection operation failed for data source AppDb on server server1 at node Lenovo-PCNode01 with the following exception: java.sql.SQLException: FATAL: password authentication failed for user "listmanagerremote" DSRA0010E: SQL State = 28P01, Error Code = 0. View JVM logs for further details.
I have fixed the deployment issues in the Eclipse Neon IDE. I think it was a result of installing either the IBM WebSphere Application Server Traditional v8.0x Developer Tools for Neon or the IBM JRE.
Eclipse console final message:
00000063 CompositionUn A WSVR0191I: Composition unit WebSphere:cuname=ListManager in BLA WebSphere:blaname=ListManager started.
PostgreSQL documents the 28P01 SQLSTATE as an invalid password:
"28P01 INVALID PASSWORD invalid_password"
https://www.postgresql.org/docs/9.0/static/errcodes-appendix.html
Check your data source configuration to ensure that you have specified the correct password, or if using an authentication alias for your data source, confirm that the authentication data configuration contains the correct password, and that you have configured the data source and/or resource reference to use that authentication data.

IBM Integration Bus: The PIF data could not be found for the specified application

I'm using IBM Integration Bus v10 (previously called IBM Message Broker) to expose COBOL routines as SOAP Web Services.
COBOL routines are integrated into IIB through MQ queues.
We have imported some COBOL copybooks as DFDL schemas in IIB, and the mapping between SOAP messages and DFDL messages is working fine.
However, when the message reaches a node where the message tree has to be serialized (for example, a FileOutput node or an MQ request), it fails with the following error:
"The PIF data could not be found for the specified application"
This is the last part of the stack trace of the exception:
RecoverableException
File:CHARACTER:F:\build\slot1\S000_P\src\DataFlowEngine\TemplateNodes\ImbOutputTemplateNode.cpp
Line:INTEGER:303
Function:CHARACTER:ImbOutputTemplateNode::processMessageAssemblyToFailure
Type:CHARACTER:ComIbmFileOutputNode
Name:CHARACTER:MyCustomFlow#FCMComposite_1_5
Label:CHARACTER:MyCustomFlow.File Output
Catalog:CHARACTER:BIPmsgs
Severity:INTEGER:3
Number:INTEGER:2230
Text:CHARACTER:Caught exception and rethrowing
Insert
Type:INTEGER:14
Text:CHARACTER:Kcilmw20Flow.File Output
ParserException
File:CHARACTER:F:\build\slot1\S000_P\src\MTI\MTIforBroker\DfdlParser\ImbDFDLWriter.cpp
Line:INTEGER:315
Function:CHARACTER:ImbDFDLWriter::getDFDLSerializer
Type:CHARACTER:ComIbmSOAPInputNode
Name:CHARACTER:MyCustomFlow#FCMComposite_1_7
Label:CHARACTER:MyCustomFlow.SOAP Input
Catalog:CHARACTER:BIPmsgs
Severity:INTEGER:3
Number:INTEGER:5828
Text:CHARACTER:The PIF data could not be found for the specified application
Insert
Type:INTEGER:5
Text:CHARACTER:MyCustomProject
It seems like something is missing in my deployable BAR file. It's important to say that my application has the message flow and it depends on a shared library that has all the .xsd files (DFDLs).
I suppose that the schemas are OK, as I've generated them using the Toolkit wizard, and the message parsing works well. The problem is only with serialization.
Does anybody know what may be missing here?
OutputRoot.Properties.MessageType must contain the name of the message in the DFDL schema. Additionally when the DFDL schema is in a shared library, OutputRoot.Properties.MessageSet must contain the name of the library.
Sounds as if OutputRoot.Properties is not pointing at the shared library. I cannot remember which subfield does that job - it is either OutputRoot.Properties.MessageType or OutputRoot.Properties.MessageSet.
You can check this easily: just look at the contents of InputRoot.Properties after an input node that has used the same shared library.
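A minimal ESQL sketch of the fix, placed in a Compute node ahead of the failing output node; the library name, namespace, and message name are hypothetical placeholders:

```sql
-- Point the serializer at the DFDL model in the shared library.
-- The exact values to use can be copied from InputRoot.Properties after an
-- input node that already parses successfully with the same library.
SET OutputRoot.Properties.MessageSet  = 'MySharedLibrary';   -- name of the shared library
SET OutputRoot.Properties.MessageType = '{http://example.org/ns}:MyMessage';  -- DFDL message
```

With these Properties fields populated, the output node can locate the DFDL schema at serialization time instead of failing with the PIF error.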
I faced a similar problem. In my case, a message flow with an HTTPRequest node using the DFDL domain parser/format to parse an HTTP response from the remote system threw this error ("The PIF data could not be found for the specified application"). "Re-selecting" the same parser domain and message type on the node, followed by a build and redeploy, solved the problem. It seemed to be a project-reference issue within the IIB Toolkit.
You need to create static libraries and reference them from the application.
In the Compute node, your coding is based on the DFDL body.