Declare Date Type Global Variable and pass that to Child Job from Parent Job - talend

I want to create a Date-type global variable and pass it to a child job. I am able to pass String and Integer types to the child job, but I'm having no luck with the Date type, as shown in this image:
Below is the code I am writing in my tJavaRow. In the child job's Context, the variables take the global variables' values, and I am using them in the child job's tOracleInput. I need two Date-type context variables whose values come from the parent job.
This is my code:
// tJavaRow: store the row values in the globalMap for the child job
String fromdate_file_epsilon = input_row.START_DATE1;
globalMap.put("fromdate_file_epsilon", fromdate_file_epsilon);
String todate_file_epsilon = input_row.END_DATE1;
globalMap.put("todate_file_epsilon", todate_file_epsilon);
Date fromdate_epsilon = input_row.START_DATE;
globalMap.put("fromdate_epsilon", fromdate_epsilon);
Date todate_epsilon = input_row.END_DATE;
globalMap.put("todate_epsilon", todate_epsilon);
Integer load_key_epsilon = input_row.LOAD_KEY;
globalMap.put("load_key_epsilon", load_key_epsilon);
System.out.println(fromdate_epsilon);
On the child job (tRunJob) component, I am passing the parent job's global variable values to the child job's context variables, e.g. Brand = ((String)globalMap.get("brand_epsilon")).

As I proposed on the Talend Community Forum, I strongly recommend leaving the date context variables as strings and converting the values to the Oracle date type with the TO_DATE function in the WHERE clause, like this:
...
AND TRUNC(ACTIVITY_DATE) >= TO_DATE('" + context.FROMDATE + "', 'YYYY-MM-DD')"
...
The date format is given here only as an example.
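To illustrate the keep-dates-as-strings approach, here is a minimal sketch in plain Python rather than Talend's Java (the date value and column name are hypothetical):

```python
from datetime import date

# Hypothetical: the context value is kept as a plain "YYYY-MM-DD" string
from_date = date(2024, 1, 15).strftime("%Y-%m-%d")

# TO_DATE then converts the string server-side, inside the WHERE clause
where_clause = (
    "TRUNC(ACTIVITY_DATE) >= TO_DATE('" + from_date + "', 'YYYY-MM-DD')"
)
print(where_clause)
```

This sidesteps the Date-type passing problem entirely: only strings cross the parent/child boundary.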
TRF

Related

pyspark add int column to a fixed date

I have a fixed date "2000/01/01" and a dataframe:
data1 = [{'index':1,'offset':50}]
data_p = sc.parallelize(data1)
df = spark.createDataFrame(data_p)
I want to create a new column by adding the offset column to this fixed date.
I tried different methods, but I cannot pass the column as an iterator, and expr raises an error like:
function is neither a registered temporary function nor a permanent function registered in the database 'default'
The only solution I can think of is
from datetime import datetime
from pyspark.sql.functions import expr, lit

df = df.withColumn("zero", lit(datetime.strptime('2000/01/01', '%Y/%m/%d')))
df = df.withColumn("date_offset", expr("date_add(zero, offset)")).drop("zero")
Since I cannot use lit and datetime.strptime inside expr, I have to use this approach, which creates a redundant column and redundant operations.
Is there a better way to do it?
Since you have tagged this as a pyspark question, in Python you can do the following:
df_a3.withColumn("date_offset",F.lit("2000-01-01").cast("date") + F.col("offset").cast("int")).show()
Edit: as per the comment below, let's assume there is an extra column named type; based on it, the following code can be used:
df_a3.withColumn("date_offset",F.expr("case when type ='month' then add_months(cast('2000-01-01' as date),offset) else date_add(cast('2000-01-01' as date),cast(offset as int)) end ")).show()
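For reference, the arithmetic that Spark's date_add performs can be sketched in plain Python, without Spark, using the same sample data as the question:

```python
from datetime import date, timedelta

rows = [{"index": 1, "offset": 50}]

# Equivalent of: F.lit("2000-01-01").cast("date") + offset days
base = date(2000, 1, 1)
for row in rows:
    row["date_offset"] = base + timedelta(days=row["offset"])

print(rows[0]["date_offset"])  # 2000-02-20
```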

How to pass values in ForEach in Azure Data Factory?

I would like to create ForEach loop and need advice:
I have a "Fetch" Lookup activity with "Select CustomerName, Country From Customers".
It returns rows like "Tesla, USA" and "Nissan, Japan". There are 10 rows in total.
I would like the ForEach loop to run 10 times and use the CustomerName and Country values in the pipeline.
The ForEach Items property is currently set to: @activity('Fetch').output (something wrong here?)
I would like to create a new Lookup inside the ForEach, with the query "SELECT * FROM Table WHERE CustomerName = 'CustomerName' AND Country = 'CountryName'".
Error of ForEach:
The function 'length' expects its parameter to be an array or a string. The provided value is of type 'Object'.
The Items property of the For Each activity should look something like this:
@activity('Fetch').output.value
You can then reference columns from your Lookup within the For Each activity using the item() syntax, e.g. @item().CustomerName. Remember, expressions in Azure Data Factory (ADF) start with the @ symbol but you don't have to repeat it in the string.
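The iteration ADF performs over output.value can be mimicked in plain Python to see how item() resolves per row (the customer data here is made up to match the question):

```python
# Shape of the "Fetch" Lookup output: an object whose "value" is an array
fetch_output = {
    "value": [
        {"CustomerName": "Tesla", "Country": "USA"},
        {"CustomerName": "Nissan", "Country": "Japan"},
    ]
}

# ForEach iterates over output.value; item() is the current row
queries = [
    "SELECT * FROM Table WHERE CustomerName = '{CustomerName}' "
    "AND Country = '{Country}'".format(**item)
    for item in fetch_output["value"]
]
print(queries[0])
```

This is why pointing Items at @activity('Fetch').output fails: that is a single object, not the array the loop needs.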

How to extract first digit from a Integer in transformer stage in IBM DataStage?

I have an integer field coming in and I want to extract its first digit. I cannot cast the field, since the data is coming from a dataset. Is there a way to extract the first digit in the Transformer stage in IBM DataStage?
Example:
Input:
ABC = 1234
Output: 1
Can anyone please help me with the same?
Thanks!
Use a Transformer, define a stage variable as VarChar, and use this formula to get the substring:
ABC[1,1]
Alternatively, you can convert your numeric value using DecimalToString().
You CAN convert to string within the context of your expression, and back again if the result needs to be an integer.
AsInteger(Left(ln_jn_ENCNTR_DTL.CCH,1))
This solution has used implicit conversion from integer to string. It assumes that the value of CCH is always an integer.
I would say: if ABC is of type int, you can define a stage variable of type Char with length 1.
You then need to convert the number to a string first, and use the Left() function to extract the first character:
Left(DecimalToString(ABC),1)
If you receive ABC as a string, you can apply Left() directly.
You can first define a stage variable (say SV) of VarChar type, to convert the input integer column into a varchar (see: stage variable definition).
Now assign the input integer column to the stage variable SV and derive the output integer column as AsInteger(SV[1,1]) (see: column definition).
That is: input integer => (type conversion to varchar) stage variable => substring [1,1] => conversion back to integer with AsInteger().
DecimalToString is applied as an implicit conversion, so all you need is the Left() function: Left(MyString,1)
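The convert-to-string, take-first-character, convert-back pattern described above looks like this in plain Python (a sketch of the logic, not DataStage syntax):

```python
def first_digit(n: int) -> int:
    # Equivalent of AsInteger(Left(DecimalToString(ABC), 1))
    s = str(abs(n))  # abs() guards against a leading minus sign
    return int(s[0])

print(first_digit(1234))  # 1
```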

Insert null value to date field in access using vba

I am trying to insert dates from one table into another table using VBA in Access, but one of the date values in the source table is blank. So while inserting the date into the new table, it throws "Invalid use of Null", run-time error 94.
If writing DAO code to clear a Date field, I found that I had to use Empty; Null and "" won't work. So for the field dtmDelivery (type Date), I had to use the following, where strDelivery is just a string with the date in it.
Set rst = dbs.OpenRecordset("tblSomething", dbOpenDynaset)
rst.Edit  ' put the existing record into edit mode before assigning fields
If (strDelivery = "") Then
    rst!dtmDelivery = Empty
Else
    rst!dtmDelivery = strDelivery
End If
rst.Update
You can use the Nz() function to define a value that should be used instead of Null. Internally, a date is a floating-point value where the integer part represents the days since 30.12.1899 and the part behind the decimal point represents the time of day (.25, for example, is 6 AM). So you can use the Nz() function to replace Null by 0 (which would be interpreted as 30.12.1899).
Another solution would be to configure the target table so that it allows Null values. You can do this easily in the design view of the table.
I think I figured it out: declare the variable as a String, then check whether the value from the table is empty; if so, assign "null" to it, otherwise assign the value from the table.
for example
Dim newdate As String
' Check whether the fetched value is Null
If IsNull(rst![value]) Then
    newdate = "null"
Else
    newdate = rst![value]
End If
I encountered this problem and solved it this way.
SQL = "update mytable set mydatefield = '" & txtdate & "'"
On the txtdate on your form, put a date format to ensure date is either a real date or just blank.
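The "allow Null in the target column and write a real NULL for blanks" approach can be sketched in Python with sqlite3 (not VBA/DAO, but the same idea; table and column names are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, delivery_date TEXT)")

str_delivery = ""  # blank date coming from the source table
# Insert NULL (None) instead of an empty string when the value is missing
value = None if str_delivery == "" else str_delivery
conn.execute("INSERT INTO orders VALUES (?, ?)", (1, value))

row = conn.execute("SELECT delivery_date FROM orders").fetchone()
print(row[0])  # None
```

Parameterized inserts also avoid building literal date strings into the SQL text, which is where the "Invalid use of Null" error originated.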

DbContext.Database.SqlQuery<KeyValuePair<int, string>>("exec mySproc") not mapping

I can't figure out why this works
var blah = this.Database.SqlQuery<MyObjectWithTwoStringPropsNamedKeyAndValue>("exec mySproc").ToList();
and this doesn't
var blah = this.Database.SqlQuery<KeyValuePair<string,string>>("exec mySproc").ToList();
"mySproc" returns three records with two varchar columns aliased "Key" and "Value".
In the second line of code, I get a list of three KeyValuePairs, but both properties (Key and Value) are null for each item in the list.
That's because KeyValuePair's properties, Key and Value, are read-only.
Their values can only be set in the constructor, not changed later.
SqlQuery tries to map the columns returned by the stored procedure, but cannot find settable properties to write them to. The documentation doesn't state that the properties must be writable, but it is clear that it uses the properties rather than a parameterized constructor:
The type can be any type that has properties that match the names of the columns returned from the query, or can be a simple primitive type.
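The same pitfall exists with any immutable type. For comparison, a Python sketch of why a mapper that only writes properties fails on read-only members (namedtuple fields, like KeyValuePair's Key and Value, cannot be assigned after construction):

```python
from collections import namedtuple

Pair = namedtuple("Pair", ["Key", "Value"])
p = Pair(Key=None, Value=None)

# A naive "write each column onto a property" step fails here,
# just as SqlQuery silently leaves KeyValuePair members unset.
mapped = True
try:
    p.Key = "first"
except AttributeError:
    mapped = False
print(mapped)  # False
```

The usual fix is the one the question already found: map into a type with settable properties (like MyObjectWithTwoStringPropsNamedKeyAndValue) and convert to KeyValuePair afterwards if needed.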