Cannot dynamically load Context Variables - Talend

I am trying to load context variables dynamically with data from a database, but I cannot get the context variables to load in Talend.
My job flow is:
The schema of my Oracle output is:
Column1 - BigDecimal
Column2 - BigDecimal
I am trying to load these values into the context variables I have created.
Can anyone help with why the context variables are not being loaded?
Also, I am not able to edit the schema of tContextLoad.
Edit: I changed my schema to Key - String and Value - String, reading the values as Strings from the database, but tContextLoad still does not load the context variables; it loads only the key and the value.
Edit 1: I converted the BigDecimal values to String in the database query itself, so there is no need to load BigDecimal into the context variables. I need the Col1 (String) DB values (multiple values) to be stored in the Var1 (String) context variable, and the Col2 (String) DB values (multiple values) in the Var2 (String) context variable.
Edit 3: Updated the workflow to handle multiple values.

The schema of my Oracle output is: Column1 - BigDecimal, Column2 - BigDecimal
This is the problem: tContextLoad only accepts a key/value schema, with both the key and the value of type String.
You must change the name and type of the columns you get from the database (in the query, for instance).
Also, I am not able to edit the schema of tContextLoad.
Yes, it's one of the components that has predefined columns (indicated in green).
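For illustration only, here is a rough Scala sketch of the reshaping tContextLoad expects (this is not Talend code; in the job itself you would do it in the SQL query or in a tMap): each context variable becomes one row holding a String key and a String value.
// Sketch: pivot one DB row of two numeric columns into the
// (key, value) String rows that tContextLoad expects.
// The names Column1/Column2 and Var1/Var2 come from the question.
val (column1, column2) = (BigDecimal(42), BigDecimal(7)) // a sample row
val contextRows: List[(String, String)] = List(
  ("Var1", column1.toString), // key = the context variable's name
  ("Var2", column2.toString)  // value = the column rendered as a String
)
contextRows.foreach { case (key, value) => println(s"$key=$value") }
In SQL this pivot would typically be done by aliasing the columns and converting them to character data in the SELECT itself.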

Setting the column data type in databases

My current model imports certain parameters from an Excel file. I am hoping to allow users to override the existing values in the database through an edit box. However, I'm faced with an error for the written code (shown in the attached image): the imported column's type is integer, while the set function requires an input of double type. I've tried writing (double) db_parameters.duration_sec and that fails too. Is there any way to convert imported data to the required type? I don't want to manually change the data type under the database fields, as I may need to re-import the Excel sheet from time to time, which auto-resets the columns back to integer type. Thanks!
Your query should look like this:
update(db_parameters)
.where(db_parameters.tasking.eq("Receive Lot"))
.set(db_parameters.duration_sec, (int)v_ReceiveLot)
.execute();
The (int) cast has to go on the parameter, not on the column.

Convert String to Int in Azure Data Factory derived column expression

I've created a data flow in Azure Data Factory and used a derived column transformation. One of the source-derived column values is '678396', which is extracted through the substring function and is of datatype "String" by default. I want to convert it into "Integer" because my target column's datatype is "Integer".
I've tried to convert the column with this expression:
ToInteger(Substring(Column_1,1,8))
Please help me with the correct expression.
Kind regards,
Rakesh
You don't need to build the expression. If your column's data are all integer-like strings such as "678396", or the output of Substring(Column_1,1,8) is always an integer string, Data Factory can convert the int string to an integer data type directly from source to sink; there is no need to convert it again.
Make sure you set the column mapping correctly in the sink settings, and everything should work well.
Update:
This is my CSV dataset:
You can set the Quote character to a single quote; that should solve the problem. See the source data previews in the Copy activity and in the Data Flow:
Copy activity source:
Data Flow overview:
In the Data Flow we do get the alert you mentioned in your comment, but we can ignore it and debug the data flow directly:
HTH.
You don't even need to strip the quotes '', as the toInteger function can convert numbers held as string type.

How to use Azure Data Factory to migrate a table in a storage account that has columns of many types

I want to use Data Factory to migrate data in a storage account, but the data in the original table is of many types, e.g. some columns are int, String, or DateTime.
When I use Data Factory I need to specify the data type, so how can I define a dynamic type and copy the columns? All the migrated data gets parsed to String type, so how can I keep each column's value type?
This is my data in the original table:
Thanks for your help.
In my experience with Data Factory, it cannot keep the value types of the source table's columns for you. You must specify the data types in the sink dataset.
Copy Data:
As you have seen, if you don't set the sink data types, the column types are passed as String by default.
One idea is to copy the data twice, copying different entity columns each time; the sink dataset supports 'Merge' and 'Replace'.
Hope this helps.
Not sure if I am understanding the question, but let me first put forward my understanding: you want to copy a table, let's say sourceT1, to SinkT1. If that's the case, you can always use the Copy activity and then map the columns. When you map the columns, it sets the data type as well.

How to use a dynamic comma-separated String value as input for a List()?

I'm building a Spark Scala application that dynamically lists all tables in a SQL Server database and then loads them into Apache Kudu.
I'm building a dynamic string variable that tracks the primary key columns for each table. The primary keys are comma-separated within the variable. The following is an example of my variable's value:
PrimaryKeys=storeId,storeNum,custId
The following is a required function to which I must pass a List[String] as input (and the PrimaryKeys variable is definitely not in the correct form for this):
setRangePartitionColumns(List("storeId","storeNum","custId").asJava)
If I just use the PrimaryKeys variable as the List input (like the following), it only works for a single column (and would fail in this example with 3 comma-separated values):
setRangePartitionColumns(List(PrimaryKeys).asJava)
The following is another example, but using a Seq(). I'm supposed to put the same primary key column names in the same format below. Manually typing the column names works fine; however, I cannot figure out how to dynamically pass in the variable's values:
kuduContext.createTable(tableName, df.schema, Seq(PrimaryKey), kuduTableOptions)
Any idea how I can parse the PrimaryKeys variable dynamically and feed it into either function, regardless of the number of comma-separated values included?
Any assistance is greatly appreciated.
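One way to do this, sketched below under the assumption that PrimaryKeys holds the example value from the question and that kuduContext, kuduTableOptions, tableName, and df exist as in your snippets, is to split the string on commas into a List[String] first:
import scala.collection.JavaConverters._
// The dynamically built value from the question.
val PrimaryKeys = "storeId,storeNum,custId"
// Split on commas and trim stray whitespace to get a List[String].
val pkList: List[String] = PrimaryKeys.split(",").map(_.trim).toList
// Then pass it to either API, for example:
//   setRangePartitionColumns(pkList.asJava)
//   kuduContext.createTable(tableName, df.schema, pkList, kuduTableOptions)
split returns an Array[String]; toList turns it into the List (which is also a Seq) that both functions accept, and asJava converts it for the Java-based Kudu API.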

OrientDB force property type to be string

I'm using OrientDB and am trying to create a new property after having inserted my data (millions of rows).
I'm trying to create a property on V in order to create an index, and I'm getting the following error:
The database contains some schema-less data in the property
'V.ACCOUNT_NO' that is not compatible with the type STRING. Fix those
records and change the schema again [ONetworkProtocolHttpDb]
Some of the fields are of type INTEGER, but it seems to me that it should be easy to convert them to STRING.
How can I do this for the entire dataset?
I tried your case by creating a simple structure in schema-less mode, where the records' ACCOUNT_NO values are a mix of INTEGER and STRING types.
You can convert the non-STRING records by using this query:
UPDATE V SET ACCOUNT_NO = ACCOUNT_NO.asString() WHERE ACCOUNT_NO.type() <> 'STRING'
About the exception: I correctly got the same error when I tried to create a new property V.ACCOUNT_NO of type STRING in schema-full mode. This is expected, because the property already exists in the database (albeit schema-less) and contains records of mixed types.
Once all the records have been converted, you'll be able to create the new property.
Hope it helps