Unable to use SQL View in DataStage ODBC Connector - datastage

I am trying to read SQL Server data in DataStage using the ODBC Connector. I tried the options below, but did not succeed:
1. Used a CTE query, because my data has duplicate records that I filter with ROW_NUMBER, but got the error "Number of bound columns exceeds the number of result columns". The same query works fine in SQL Server.
2. To work around this, I created a view in SQL Server and used that view in the DataStage ODBC Connector, but I get an "invalid object name" error.
Is there a better approach to get the data from SQL Server into DataStage?
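For reference, a typical ROW_NUMBER deduplication wrapped in a view looks like the sketch below (table, column, and view names are illustrative, not from the question). Creating the view in the same database and schema that the ODBC data source connects to, and referencing it fully qualified, often avoids the "invalid object name" error:

```sql
CREATE VIEW dbo.v_orders_dedup AS
SELECT id, customer, amount
FROM (
    SELECT id, customer, amount,
           ROW_NUMBER() OVER (PARTITION BY id ORDER BY amount DESC) AS rn
    FROM dbo.orders
) t
WHERE rn = 1;

-- In the ODBC Connector, select from the fully qualified view:
SELECT id, customer, amount FROM dbo.v_orders_dedup;
```

If the connector still reports "invalid object name", it is worth confirming that the ODBC DSN points at the database where the view was created.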

Related

View query used in Data Factory Copy Data operation

In Azure Data Factory, I'm doing a pretty vanilla 'Copy Data' operation. One dataset to another.
Can I view the query being used to perform the copy operation? Apparently, it's a syntax error, but I've only used drag-and-drop menus. Here's the error:
ErrorCode=SqlOperationFailed,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=A
database operation failed. Please search error to get more
details.,Source=Microsoft.DataTransfer.ClientLibrary,''Type=System.Data.SqlClient.SqlException,Message=Incorrect
syntax near the keyword 'from'.,Source=.Net SqlClient Data
Provider,SqlErrorNumber=156,Class=15,ErrorCode=-2146232060,State=1,Errors=[{Class=15,Number=156,State=1,Message=Incorrect
syntax near the keyword 'from'.,},],'
Extra context
1. Clear the schema and import it again.
2. This mostly happens when the table schema was changed after the pipeline and datasets were created; verify this first.
3. Refresh the schema and datasets whenever the SQL table schema changes.
4. Wrap table or view names used in the query in square brackets, e.g. [dbo].[persons].
5. In the datasets, select the table name.
6. Publish before testing.
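As a sketch of item 4, a source query with bracketed identifiers (the table and column names here are examples only) would look like:

```sql
SELECT [PersonID], [LastName], [FirstName]
FROM [dbo].[persons]
WHERE [LastName] IS NOT NULL;
```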
I followed the same scenario and reproduced it without any error, so the error above is mainly caused by a schema mismatch.
Source dataset:
In source dataset, I manually added and connected to SQL database with Sample SQL table.
Sink dataset:
In the sink dataset, I added another SQL database with auto-create table, write behavior: Upsert, key column: PersonID.
Before execution there was no table in the SQL database; after the run succeeded, the auto-created table contained the sample output in the Azure SQL database.

how to use parameterized Query in after sql in datastage?

I have to create a table in DB2, reading the query from a file in the Before/After SQL tab in DataStage.
I am using the DB2 connector for this.
I have parameterized the query, but I am getting the error below:
An unexpected token was found: '/'.
CREATE TABLE Temp AS (#Query#) WITH DATA
Can you suggest how I can achieve this? Thanks in advance.
Try loading the entire query into the parameter, rather than just the table name.
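For example (the parameter name #Query# comes from the question; the SELECT itself is illustrative), set the job parameter to the full query text rather than a table name:

Query = SELECT COL1, COL2 FROM MYSCHEMA.SOURCE_TAB

so that the After SQL statement expands to valid DB2 DDL:

```sql
CREATE TABLE Temp AS (SELECT COL1, COL2 FROM MYSCHEMA.SOURCE_TAB) WITH DATA
```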

Inserting records from a Firebird database table into a SQL Server table

I have an application that uses a Firebird (version 2.5) database. I want a trigger on one of its tables to insert the changed row into a table in another database, which is SQL Server 2008 R2. When I commit, I get the following error:
ErrorCode: 335544569 (ErrorMessage: Dynamic SQL Error, SQL error code = -104).
Code:
CREATE TRIGGER "trig_INV"
FOR "INVA"
ACTIVE
AFTER UPDATE
POSITION 100
AS
BEGIN
IF ((updating) AND ((old.cold <> new.cold))) THEN
BEGIN
INSERT INTO 192.168.17.206/1043:[RBT].[dbo].[N_Inv] ([COLA], [COLB], [COLC], [COLD], [COLE])
SELECT FIRST 1
"COLA", "COLB", "COLC", "COLD", "COLE"
FROM "INVA"
ORDER BY COLA DESC;
END
END
I am not sure whether Firebird triggers can push records to a SQL Server database. It would be great if anyone who has tried this could share a reference. Thanks in advance.
You get that error because the syntax you're using doesn't exist in Firebird. Firebird has no support for connecting to database systems other than Firebird (in theory you could write a provider that allows connecting to other database systems, but as far as I know, none exists in practice).
If you want to synchronize to a different database system, you will either need to write a UDF or UDR (the replacement of UDFs introduced in Firebird 3) to do this for you, or a custom application that provides synchronization, or use third-party software to do this (for example, SymmetricDS).
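One hedged sketch of the "custom application" route: keep the trigger entirely inside Firebird and write changed rows to a local queue table, which an external process then reads and forwards to SQL Server. The queue table, trigger name, and column types below are assumptions based on the question, not taken from the asker's schema:

```sql
-- Local staging table; an external sync job drains it into SQL Server.
CREATE TABLE INVA_SYNC_QUEUE (
    COLA INTEGER,
    COLB VARCHAR(50),
    COLC VARCHAR(50),
    COLD VARCHAR(50),
    COLE VARCHAR(50)
);

SET TERM ^ ;
CREATE TRIGGER trig_INV_queue FOR INVA
ACTIVE AFTER UPDATE POSITION 100
AS
BEGIN
    -- Only queue rows whose COLD value actually changed.
    IF (OLD.COLD <> NEW.COLD) THEN
        INSERT INTO INVA_SYNC_QUEUE (COLA, COLB, COLC, COLD, COLE)
        VALUES (NEW.COLA, NEW.COLB, NEW.COLC, NEW.COLD, NEW.COLE);
END^
SET TERM ; ^
```

The external application (or a tool such as SymmetricDS) can then poll INVA_SYNC_QUEUE over a regular Firebird connection, insert the rows into [RBT].[dbo].[N_Inv] over a SQL Server connection, and delete them from the queue once delivered.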

How to store result set or jobid of sql query on cloud?

I am pushing data from SQL Query on IBM Cloud to Db2 on Cloud. When I query data and store the result in my S3 bucket, the job id of that query result is saved.
But when I push data into Db2, no job id is saved, although the data is in fact inserted; I verify this by checking Db2 directly.
How do I know from SQL Query whether my query succeeded or not? I want to confirm it while running the SQL query on cloud.
My query:
select
a.col, a.col2, explode (a.col3) company_list
from
cos://us-east/akshay-test-bucket1/Receive-2020-03-11/CP-2020-03-11/ STORED AS PARQUET c
into
crn:v1:bluemix:public:dashdb-for-transactions:eu-gb:a/2c36cd86085f442b915f0fba63138e0c:61f353e4-6640-4599-b1dd-48ee52ee008d::/schema_name.table_name
Here I am storing data into Db2, and SQL Query only says "A preview is only possible for results stored in Cloud Object Storage" (see the query above).

Getting Mapping error in OLE DB Destination when use SQL Command under Data Access Mode

I am working on an SSIS integration using Visual Studio 2017. I have tested Data Access Mode 'Table or View', which worked. Now I need to filter the data, so I am trying Data Access Mode 'SQL Command', but I am getting a mapping issue in the OLE DB Destination Editor.
In the destination data flow I use an OLE DB Destination, and under 'Component Properties' SQL Command I typed the following script:
INSERT INTO User_Filtered
(UserGUID
,IdNum
,FirstName
,LastName
,Email
,PostCode)
VALUES
(?,?,?,?,?,?)
On the Mappings page I get the error:
Error at User DataFlow [OLE DB Destination]: No column information was returned by the SQL command.
Under the OLE DB Source I typed the following script as the SQL Command, which seems fine:
SELECT u.*
FROM [AnotherServer].[dbo].[Users] AS u
where [email] IS NOT NULL AND [org] IS NOT NULL
In the OLE DB Destination you only need a SELECT statement listing the columns that will be inserted into, not an INSERT statement. After that you can map the columns on the Mappings page.
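Following that advice, the destination's SQL command becomes a plain SELECT over the target table, using the column names from the question:

```sql
SELECT UserGUID, IdNum, FirstName, LastName, Email, PostCode
FROM User_Filtered
```

SSIS reads this statement's metadata to derive the destination columns; the component generates the actual INSERT at run time, so no parameter placeholders are needed.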