SAP Table Connector - How to query an SAP table using RFC from Azure Synapse - azure-data-factory

We are using SAP Table Connector on Azure Synapse to extract SAP tables. However, we would like to filter the data in the copy activity.
I tried the RFC table options with the COLUMN EQ 'SOME VALUE' pattern.
This worked, but we would like to combine filters with AND and OR, like this: "COLUMN EQ 'SOME VALUE' AND COLUMN1 EQ 'SOME VALUE'". I don't know whether this is possible, or whether there is a better way to apply this type of filter.
How can we overcome this issue?
Thanks for listening.
I tried AND, &&, a space, and a comma, but none of them worked, so I think this might not be possible.

I feel like it should be possible to combine conditions with AND, but so far everything indicates it is not supported by default. A custom function module would be able to support it.
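For reference, RFC_READ_TABLE-style function modules (which a custom module would typically be modeled on) take their filter as an OPTIONS table of text rows that get concatenated into an ABAP Open SQL WHERE clause, so a compound condition is expressed by splitting it across rows, e.g. (column names taken from the question):

COLUMN EQ 'SOME VALUE'
AND COLUMN1 EQ 'SOME VALUE'

Whether the Synapse connector passes such a compound string through to the underlying module is exactly the part that appears to be unsupported by default.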

Related

Talend - Output data to Snowflake table with spaces in Field Names

I have a very specific requirement to output data to a Snowflake table, but the field names must have spaces in them. Snowflake appears to handle this okay, but I'm unsure how Talend will, as I understand Java doesn't allow spaces in identifiers. Can anyone help?
Also, are there other tools that won't handle spaces in field names (e.g. R or Python)? If so, we would be restricting use of the warehouse by doing this.
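For reference, Snowflake itself accepts spaces in names via double-quoted identifiers (which then become case-sensitive); a minimal sketch with a hypothetical table:

CREATE TABLE "My Table" ("Field One" VARCHAR, "Field Two" NUMBER);
SELECT "Field One", "Field Two" FROM "My Table";

So the open question is really whether Talend's generated Java code can carry such names through to the output component.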

Creating a Spectrum table in Matillion for a CSV file with commas inside quotes

I have a scenario for creating a Spectrum table in Redshift using Matillion.
My CSV file data is like this:
column1,column2,column3
abc,"qwety,pqr",xyz
but in the Spectrum table I am seeing the data as
column1  column2  column3
abc      qwety    pqr
Matillion is not treating the quoted value as a single field.
Can you please suggest how to achieve this using Matillion's EXTERNAL TABLE component?
Basically you would like to specify a quote parameter for your CSV data.
Redshift has 2 ways of specifying external tables (see Redshift Docs for reference):
using the default built-in SerDes and properties like ROW FORMAT DELIMITED, FIELDS TERMINATED BY
explicitly specifying a SerDe with ROW FORMAT SERDE, WITH SERDEPROPERTIES
I don't think it's possible to specify a quote parameter using the built-in SerDes.
It is possible to specify them using org.apache.hadoop.hive.serde2.OpenCSVSerde (look here for details on its properties), but beware that there are known problems with it, such as the one described in this SO question.
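To illustrate the second approach, the DDL with a quote parameter would look roughly like this (schema name and S3 location are placeholders; column names taken from the question):

CREATE EXTERNAL TABLE spectrum_schema.my_csv_table (
  column1 VARCHAR,
  column2 VARCHAR,
  column3 VARCHAR
)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.OpenCSVSerde'
WITH SERDEPROPERTIES (
  'separatorChar' = ',',
  'quoteChar' = '"'
)
STORED AS TEXTFILE
LOCATION 's3://my-bucket/my-prefix/';

Note that OpenCSVSerde treats every column as a string, which is one of the known problems referred to above.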
Now for Matillion:
I have never used Matillion, but looking at their Redshift External Table documentation page, it looks like it's only possible to specify the FORMAT and the FIELD TERMINATOR, but not a SerDe and its properties. Hence it's not possible to specify the quote parameter for the external table - unless there are some undocumented means to specify a custom SerDe.
Personal note:
We have experienced many problems with ingesting data stored as CSV, and we basically try to avoid it. There's no standard for CSV, each tool implements its own version of support for it, and it's very difficult to convince all your tools to see the data the same way.

How to nest variables in grafana?

I have a simple Custom variable called route with e.g. this value:
/foo/bar,/foo/baz,/foo/baz/foo
I'm trying to map these values to some more understandable values, e.g. Custom route_names:
bar,baz,foo
Searching on Google turned up people doing nested variables, but whatever I try in Grafana 5.3.4, I can't get it to work. If I create a Query variable and use -- Grafana -- as the source, I don't know what to put in the query field: route.* didn't do anything, and neither did $route.
What is the correct way of selecting a value from one variable and mapping it to the other? That is, what query language is being used when selecting -- Grafana -- as the datasource?
As a side note, I have two datasources at the moment, my actual data source where I get my graph data from and -- Grafana --.
The answer above is correct. You can solve the "key/value pairs" problem with: SELECT 'txt1' AS __text, 'value1' AS __value UNION SELECT 'txt2' AS __text, 'value2' AS __value
This is not possible with Custom template variables (unless something changed in recent Grafana versions). It can be done with variables coming from MySQL, Postgres, and ClickHouse datasource queries; see the examples in the https://community.grafana.com/t/key-value-style-for-custom-template-variable-configuration-and-usage/3109 thread. I can't speak to this feature's support in other datasource types.
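As a concrete sketch for the routes in the question, the Query variable against a SQL datasource could be (values copied from the question):

SELECT 'bar' AS __text, '/foo/bar' AS __value
UNION ALL SELECT 'baz', '/foo/baz'
UNION ALL SELECT 'foo', '/foo/baz/foo';

Grafana then shows the __text labels in the dropdown and substitutes the matching __value wherever the variable is used.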

SSRS Dynamic Columns Display

Using SQL 2008R2
I have a need to create a SSRS report where the user can specify the Columns returned AND the order in which they are returned. Dynamic data and ordering.
Example:
Available Columns A,B,C,D,E
User specifies they want to see: C,D,A
No issue on the data side: I'm using a stored procedure and can handle this easily.
On the SSRS side, I've seen similar questions mention using a "matrix".
However, I'm looking for opinions on the best approach to handle this on the SSRS side. What is the best way to handle a dynamic number of returned columns and dynamic ordering of columns?
As has already been mentioned, SSRS is not the way to go for this.
If the order of the columns were not customizable, then you could handle column visibility using SSRS expressions (see the sketch below), but presenting the columns in a dynamic order is not easy in SSRS.
For that kind of thing you could use Excel's pivot table functionality, use a third-party .NET solution such as ASP.NET MVC, or build some home-grown ASP.NET solution.
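If only the visibility half were needed, each column's Hidden property could be driven by an expression along these lines (Columns being a hypothetical multi-value parameter holding the selected column names):

=IIF(InStr(Join(Parameters!Columns.Value, ","), "C") = 0, True, False)

Ordering has no equivalent per-column property, which is why approaches like the matrix come up.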
Try this:
1. In SSRS, create parameters ColumnA and ColumnB
2. Create your dataset - don't type your query directly; use an expression (the fx button)
3. In the expression, you can write your query like this:
="SELECT " + Parameters!ColumnA.Value + "," + Parameters!ColumnB.Value + " FROM Table"
You can solve your dynamic ORDER BY problem in the same way.
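For example, with a hypothetical extra parameter SortColumn:

="SELECT " + Parameters!ColumnA.Value + "," + Parameters!ColumnB.Value + " FROM Table ORDER BY " + Parameters!SortColumn.Value

Since these expressions splice parameter values straight into the SQL text, restrict all such parameters to a fixed list of available values so users cannot inject arbitrary SQL.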

How to use Pentaho Data Integration to copy columns between tables

I thought this would be an easy task, but since I am new to PDI, I have so far not been able to find out which step to choose to accomplish the following:
I am using Pentaho Data Integration (formerly Kettle), Community Edition, to map/copy values from one table ('tasksA') in one database 'A' to another table 'tasksB' in another database 'B'. tasksA has a column 'description' and I want to copy these values to the column 'taskName' in 'tasksB'.
Furthermore, I have to copy each value of 'description' several times, since
in 'tasksB', there are multiple lines for each value in 'taskName'.
Maybe this would be possible with direct SQL, but I wanted to see whether I could define this more readably with PDI, especially because in the next step I will have to extend it to other tables involved.
So I have to specify which value of 'description' is mapped onto which value of 'taskName', and that it should be replaced in every tuple containing this value in the column 'taskName' (well, that sounds like a WHERE clause...).
My first experiments with the 'Table input' and 'Table output' steps did not work: I simply drew a hop between them and modified the 'Database fields' tab of the 'Table output' step, which generated 'drop column' statements in the resulting SQL, and that is not what I want. I don't want to modify the schema, just copy the values.
It would be great if someone could point me to the right steps/transformations needed. I worked through the first examples from the Pentaho Wiki and have the 'Pentaho Kettle Solutions' book by Casters et al., but could not find out how to solve this. Many thanks in advance for any help.
If I got this right, you should use a 'Table input' step connected to an 'Insert/Update' step.
On the Insert/Update step you need to specify the keys from tasksA that should be looked up in tasksB, then define which fields in tasksB should be updated: description (as the stream field) -> taskName (as the table field).
Keep in mind that if the key is not found, a row will be inserted into tasksB. If that is not what you intend, you'll need to build something like: Table Input -> Database Lookup -> Filter Rows -> Insert/Update
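For comparison, the direct-SQL equivalent of the Insert/Update step's update path would be roughly the following, assuming a hypothetical shared key column taskId and assuming both tables were reachable from a single connection - which is exactly the requirement the PDI steps avoid:

UPDATE tasksB
SET taskName = a.description
FROM tasksA a
WHERE tasksB.taskId = a.taskId;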
@RFVoltolini has a good answer. Alternatively you could go
Table Input -> Update
and connect the error output to something else, like a Text file output.