Talend: save Oracle sequence number to a context variable

I have the following problem: I want to get the next sequence number from Oracle and save it in a context variable.
I have a working tOracleInput_1 (tLogRow shows the correct output):
.----------+--------.
|     tLogRow_1     |
|=---------+-------=|
| key      | value  |
|=---------+-------=|
| datei_id | 264032 |
'----------+--------'
Now I'd like to write this value ('datei_id') to 'context.dateiId'. For this I've connected a Main row to a tJava component, and in tJava I have:
context.dateiId = ((String)globalMap.get("tOracleInput.datei_id"));
The value of context.dateiId is now
null
Can anyone help me with this issue?

Use tJavaRow in place of tJava, and inside it use
context.dateiId = input_row.schemaColumnName;
where schemaColumnName is the column name from your tOracleInput schema, which is mapped to tJavaRow via the Main flow.
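For example, assuming context.dateiId is a String context variable and the column is named datei_id as in your tLogRow output, the tJavaRow code would look something like this:
// tJavaRow: input_row holds the current row coming in over the Main flow.
// If the context variable is a String and the column is numeric,
// convert the value explicitly.
context.dateiId = String.valueOf(input_row.datei_id);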

Is there a way to put a counter column when doing Get commands in PowerShell?

I need to export the results of a Get command to a CSV. The order column should be generated automatically on each call, giving each object its counter based on its position in the list. Would this be possible?
For example, when I run something like
Get-VMHost | Select @{N="Order";E={$suborder++}}, Name, Version, Build | Export-Csv -Path .\smth.csv
I would like to get a result like
Order Name        Version Build
----- ----        ------- -----
1     servername1 1.1.1   11111111
2     servername2 1.1.1   11111111
3     servername3 1.1.1   11111111
Would this be possible without using an array?
There are two problems with the current approach:
Unary ++ doesn't output anything by default.
Select-Object runs property expressions in their own scope, so you're not actually updating $suborder; you're creating a new local variable every time.
The first problem can be solved by wrapping the operation in the grouping operator (...):
... | Select @{N="Order";E={($suborder++)}}, ...
The second problem can be solved by obtaining a reference to an object that exposes the suborder value as a property.
You can either use a hashtable or a custom object to "host" the counter:
$counter = @{ suborder = 1 }
... | Select @{N="Order";E={($counter['suborder']++)}}
... or you can make PowerShell wrap the original variable in a PSReference by using [ref]:
$suborder = 1
... | Select @{N="Order";E={(([ref]$suborder).Value++)}}
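Putting it together, a minimal sketch of the full pipeline from your question, using the hashtable variant (-NoTypeInformation just keeps the type header out of the CSV):
# The counter lives in a hashtable so the property expression can update it.
$counter = @{ suborder = 1 }
Get-VMHost |
    Select-Object @{N="Order";E={($counter['suborder']++)}}, Name, Version, Build |
    Export-Csv -Path .\smth.csv -NoTypeInformation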

How to emulate the array_join() function in Spark 2.2

For example, if I have a dataframe like this:
|sex| state_name| salary| www|
|---|------------------|-------|----|
| M| Ohio,California| 400|3000|
| M| Oakland| 70| 300|
| M|DF,Tbilisi,Calgary| 200|3500|
| M| Belice| 200|3000|
| m| Sofia,Helsinki| 800|7000|
I need to join the comma-separated values in the "state_name" column into a single String, with a delimiter specified by me. Also, I need to put a string at the beginning and the end of the generated string (the opposite of a strip() method or function).
For example, if I want an output like this:
|cool_city |
|--------------------------------|
|[***Ohio<-->California***] |
|[***Oakland***] |
|[***DF<-->Tbilisi<-->Calgary***]|
|[***Belice***] |
|[***Sofia<-->Helsinki***] |
The solution that I've already coded with Spark 3.1.1 is this:
df.select(concat(lit("[***"),
                 array_join(split(col("state_name"), ","), "<-->"),
                 lit("***]")).as("cool_city")).show()
The problem is that the computer where this will be running uses Spark 2.1.1, and the array_join() method isn't supported in this version (it's a pretty big project and upgrading the Spark version isn't on the table). I'm pretty new to Scala/Spark, and I don't know if there's another function that could help me emulate array_join(), or how to code a UDF with the same behavior.
I would greatly appreciate your help!
I don't know Scala, but try this:
df.select(concat(lit("[***"),
                 concat_ws("<-->", split(col("state_name"), ",")),
                 lit("***]")).as("cool_city")).show()
UPDATE
Avoiding the column split:
df.select(concat(lit("[***"),
                 regexp_replace(col("state_name"), ",", "<-->"),
                 lit("***]")).as("cool_city")).show()
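If you do want a UDF instead (as you mention), a minimal sketch would be the following, where joinStates is just a name I've made up; it should behave like the array_join() version on Spark 2.1.1:
import org.apache.spark.sql.functions.{col, udf}

// Split on "," and rejoin with the custom delimiter, adding the wrapper strings.
val joinStates = udf { s: String => "[***" + s.split(",").mkString("<-->") + "***]" }

df.select(joinStates(col("state_name")).as("cool_city")).show(false)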

Store elements from a file into a HashSet in Scala

I am playing around with Scala and I want to open a text file, read each line, and save some of the fields in a HashSet.
The input file will be something like this:
1 2 3
2 4 5
At first, I am just trying to store the first element of each column in a variable, but nothing seems to happen.
My code is:
var id = 0
val textFile = sc.textFile(inputFile);
val nline = textFile.map(_.split(" ")).foreach(r => id = r(0))
I am using Spark because I want to process larger amounts of data later, so I'm trying to get used to it. I am printing id, but I only get 0.
Any ideas?
A couple of things:
First, inside map and foreach you are running code out on your executors. The id variable you defined is on the driver. You can pass variables to your executors using closures, but not the other way around. If you think about it, when you have 10 executors running through records simultaneously, which value of id would you expect to be returned?
Edit - foreach is an action
I mistakenly called foreach not an action below. It is an action that just lets you run arbitrary code against your rows. It is useful if you have your own code to save the result to a different data store, for example. foreach does not bring any data back to the driver, so it does not help with your case.
End edit
Second, all of the Spark methods you called are transformations; you haven't called an action yet. Spark doesn't actually run any code until an action is called. Instead it just builds a graph of the transformations you want to happen until you specify an action. Actions are things that require materializing a result, either to provide data back to the driver or to save it out somewhere like HDFS.
In your case, to get values back you will want to use an action like "collect" which returns all the values from the RDD back to the driver. However, you should only do this when you know there aren't going to be many values returned. If you are operating on 100 million records you do not want to try and pull them all back to the driver! Generally speaking you will want to only pull data back to the driver after you have processed and reduced it.
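For example, a minimal sketch for your case (safe here because the sample file is tiny):
// collect() is an action: it triggers the computation and returns the
// results to the driver, unlike mutating a driver-side var inside foreach.
val ids = sc.textFile(inputFile).map(_.split(" ")(0)).collect()
ids.foreach(println)   // prints 1 and 2 for the sample file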
"I am just trying to store the first element of each column in a variable, but nothing seems to happen."
val file_path = "file.txt"
val ds = ss.read.textFile(file_path)   // ss is your SparkSession
val ar = ds.map(x => x.split(" ")).first()
val (x, y, z) = (ar(0), ar(1), ar(2))
You can access the first row's values with x, y, and z as above. With your file, x=1, y=2, z=3.
val ar1 = ds.map(x => x.split(" "))
val final_ds = ar1.select($"value".getItem(0).as("col1"),
                          $"value".getItem(1).as("col2"),
                          $"value".getItem(2).as("col3"))   // you can name the columns like this
Output:
+----+----+----+
|col1|col2|col3|
+----+----+----+
| 1| 2| 3|
| 2| 4| 5|
+----+----+----+
You can run any kind of SQL-style query on final_ds, like the small sample below.
final_ds.select("col1","col2").where(final_ds.col("col1") > 1).show()
Output:
+----+----+
|col1|col2|
+----+----+
| 2| 4|
+----+----+

Using DBFit variables for other Fixtures

Is it possible to use a DbFit variable in another fixture? For example, if I create a query in DbFit with a variable <<firstname as follows:
SELECT * FROM System.Columns Where ColumnName= 'FirstName' | <<firstname |
Is it possible to include a reference to that page in another suite of tests that uses a ColumnFixture or RestFixture, and call the variable name to check against the database?
In the RestFixture, I'm checking a database column name against a node so can I do something like:
| GET | /resources/6 | | //${firstname} |
Where I'm checking if an XML node named FirstName exists in the database.
With fitSharp (.NET), it is. With Java, I'm not sure.
I was struggling with the same issue; IMO it's hard to find any good documentation. I stumbled upon the answer while skimming some arbitrary tests somewhere.
You can access the symbol in RestFixture using %variablename%.
RestFixture
| GET | /resources/6 | | //%firstname% |

SSRS Variable Expression - Sum, Sum, Scope?

I'm trying to write an expression for a variable (not a parameter), so that I can use/reference it to do a calculation in another textbox. I have multiple datasets, and I need the SUM(SUM(Fields!amount.Value)) for each of these datasets. I am then going to use these numbers in another textbox, adding them together. I need some assistance with the syntax. I am able to use SUM by itself without an issue. For example, this works fine:
=SUM(Fields!Amount.Value, "DataSet1")
But I get an error when trying to amend it to the following (which is what I actually need):
=SUM(SUM(Fields!amt.Value, "Acctrange_90300_90399_InterestExpenses"))
I get an error saying
"The variable expression for the report 'body' uses an aggregate
expression without a scope. A scope is required for all aggregates
used outside of a data region unless the report contains exactly one dataset."
I have a hunch that there's a problem with my syntax/parentheses placement. Any suggestions?
Could you please provide a detailed example, with maybe sample data and an expected solution?
I do not understand why you want to SUM(SUM()). The inner SUM() would result in a single integer value; why would you want to do another SUM on just a single value? Even if it worked, it would just be the same value. I apologize if I understood the question wrongly; I can't comment, so I am answering, but I want to know clearly what you are looking for. I can understand it if you are trying to do SUM(SUM(), SUM(), SUM()...), as in, a sum of sums.
Sorry again for answering just to inquire for more info, but I can't see a comment option.
UPDATE:
OK, now I think you have something like this:
      | a | b | c | d |
------+---+---+---+---+
1     | * | * | * | * |
2     | * | * | * | * |
3     | * | * | * | * |
total | w | x | y | z |
So SUM(Fields!a.Value) would give you w, SUM(Fields!b.Value) would give you x, and so on.
And you want w+x+y+z? If so, then you can do this:
Add a calculated field to your dataset to compute the row totals, like
{ (a1+b1+c1+d1), (a2+b2+...), ... },
and then, from your variable, call
SUM(Fields!CalculatedField.Value).
For the above example, you can give the calculated field an expression like:
=CInt(Fields!a.Value) + CInt(Fields!b.Value) + CInt(Fields!c.Value) + CInt(Fields!d.Value)
This makes each entry in the calculated field the sum of the corresponding entries in all fields (e.g., a row (1, 2, 3, 4) yields 10), so the sum of the calculated field gives you your answer.
Hope that's what you wanted. Otherwise, well, I tried understanding the problem. :)
Outer SUM needs a scope too. Try this:
=SUM(SUM(Fields!amt.Value,"Acctrange_90300_90399_InterestExpenses"),"Acctrange_90300_90399_InterestExpenses")
I expect this will also fail, in some new and different way, but that error will get you closer to a solution.
When SSRS wants to give me headaches like this, I cheat. I put the inner calculation in a hidden text box (or name the box that displays it, if I want it to show), then refer to it by name. So if I had the sum of the first column in a text box called txtFirstColumnSum, and the second column's sum in a text box called txtSecondColumnSum, I'd use:
=CDec(ReportItems!txtFirstColumnSum.Value) + CDec(ReportItems!txtSecondColumnSum.Value)...
Another way to do it is using the built-in scopes, but writing custom code to handle the math between the fields.
For example, if I want the percentage of the current YTD sales over the same period in the prior year, and also want to check for divide-by-zero, I can do it in an expression; but since I use it at several levels, I instead made custom code:
Public Function DeltaPercent(ByVal currentAmount As Decimal, ByVal pastAmount As Decimal) As Decimal
    Dim difference As Decimal
    Dim result As Decimal
    ' Guard against divide-by-zero
    If pastAmount <> 0 Then
        difference = currentAmount - pastAmount
        result = difference / pastAmount
    Else
        result = 0
    End If
    Return result
End Function
Then to get the % change, I call it:
=Code.DeltaPercent(Sum(Fields!SalesYTD.Value, "SalesTerritory"),Sum(Fields!SalesPY1TD.Value, "SalesTerritory"))
One tip I wish I'd known sooner: If you use the syntax
ReportItems("txtFirstColumnSum").Value
you have to make sure you type it right. If you use the bang syntax (!) above, you get IntelliSense.