How do I parameterize the where condition in a lookup activity query in azure data factory? I have created a pipeline parameter and tried to pass it to the lookup activity query as given below.
select max(dt) as dt from tab1 where col='@pipeline.parameters.parama1'
I have tried it with quotes, without quotes, and with curly brackets, but it still isn't working. Any help would be appreciated.
Regards,
Sandeep
Official doc here: https://learn.microsoft.com/en-us/azure/data-factory/control-flow-expression-language-functions
Expressions can also appear inside strings, using a feature called string interpolation where expressions are wrapped in @{ ... }.
Taking this into consideration, this may work for you:
select max(dt) as dt from tab1 where col=@{pipeline().parameters.param}
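Note that the interpolation is a plain text substitution, so if col is a string column you still need to supply the surrounding single quotes yourself:
select max(dt) as dt from tab1 where col='@{pipeline().parameters.param}'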
Hope this helped!
Related
I'm trying to add named parameters to a dataset query in an SSRS report (I'm using Report Builder), but I have had no luck discovering the correct syntax. I have tried @parameter, $1, $parameter and others, all without success. I suspect the syntax is just different for PostgreSQL versus normal SQL.
The only success I have had with passing parameters was based on this answer.
It involves using ? for every single parameter.
My query might look something like this:
SELECT address, code, remarks FROM table_1 WHERE date BETWEEN ? AND ? AND apt_num IS NULL AND ADDRESS = ?
This does work, but in the case of a query where I pass the same parameter to more than one part of the SELECT statement, I have to add the same parameter to the list multiple times as shown here. They are passed in this order, so adding a new parameter to an existing query results in having to reshuffle, and sometimes completely rebuild, the query parameters tab.
What are the proper syntax and naming requirements for adding named Parameters when using a PostgreSQL data source in SSRS?
From my comment, this is what it would look like with a regular join:
with inparms as (
select ? as from_date, ? as to_date, ? as address
)
select t.address, t.code, t.remarks
from inparms i
join table_1 t
on t.date between i.from_date and i.to_date
and t.apt_num is null
and t.address = i.address;
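With this shape, each ? appears exactly once, so the Parameters tab needs just the three entries (from_date, to_date, address) in that order; reusing a value elsewhere in the query is simply another reference to the CTE column.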
I said cross join in my comment because it is sometimes quicker when retrofitting somebody else's SQL instead of trying to untangle things (thinking of a friend who uses right join sometimes just to ruin my day).
I want to use a query in a copy job for my source in an Azure Data Factory pipeline together with a date function - here is the dummy query:
SELECT * FROM public.report_campaign_leaflet WHERE day="{today - 1d}"
I've found some documentation about dynamic content and some other material, but no information on how to use date functions directly in a SQL query.
Maybe someone has a hint for me?
Thanks & best,
Michael
Here is a possible solution for your problem.
In your copy activity, on the source side, choose Query in the Use query option, and then write an expression in the query box.
Here is the expression @concat('SELECT * FROM public.report_campaign_leaflet WHERE day=','"',formatDateTime(adddays(utcnow(),-1), 'yyyy-MM-dd'),'"')
The formatDateTime function just formats the output of addDays(utcnow(), -1) into yyyy-MM-dd format.
Alternatively, you could define a pipeline parameter, processDate for example, set its value from an expression in the trigger definition, and then simply reference that parameter in the query. (suggestion)
You need to replace the double quote (") with two single quotes (''):
@concat('SELECT * FROM public.report_campaign_leaflet WHERE day=','''',formatDateTime(adddays(utcnow(),-1), 'yyyy-MM-dd'),'''')
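For example, if the pipeline ran on 2019-05-02 (a date chosen purely for illustration), the corrected expression evaluates to:
SELECT * FROM public.report_campaign_leaflet WHERE day='2019-05-01'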
Is there a way in the V2 Copy Activity to operate upon one of the input columns (of type string) with an expression? Before I load rows to the destination, I need to limit the number of characters in the column.
My hope was to simply switch from something like this:
"ColumnMappings": "inColumn: outColumn"
to something like this:
"ColumnMappings": "#substring(inColumn, 1, 300): outColumn"
If anyone can point me to where I can read up on where and when string expressions can be used, I could use the guidance.
This is the official documentation on expressions and functions: https://learn.microsoft.com/en-us/azure/data-factory/control-flow-expression-language-functions
And this is the documentation on mappings: https://learn.microsoft.com/en-us/azure/data-factory/copy-activity-schema-and-type-mapping
Also remember that if you are using a defined query in the copy activity, you can use SQL functions like CAST([fieldName] AS varchar(300)) to limit the number of characters in a particular field.
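For instance, a source query along these lines (table and column names are illustrative) truncates the value before the copy runs:
SELECT CAST([inColumn] AS varchar(300)) AS outColumn,
       [id]
FROM dbo.SourceTable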
Hope this helped!
When you don't have a SQL source but your destination is a SQL sink, you can use a stored procedure to insert your data into the final table. That way, you can define these kinds of transformations in the stored procedure. I don't think Data Factory can handle these transformations by itself; it is intended more as an orchestrator.
Have a look here:
https://learn.microsoft.com/en-us/azure/data-factory/connector-sql-server#invoke-stored-procedure-from-sql-sink
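As a rough sketch of that pattern (all object names here are illustrative, not from the linked doc), the truncation would live inside the procedure:
CREATE TYPE dbo.InputRowType AS TABLE (inColumn varchar(max));
GO
CREATE PROCEDURE dbo.spInsertRows @rows dbo.InputRowType READONLY
AS
BEGIN
    INSERT INTO dbo.TargetTable (outColumn)
    SELECT SUBSTRING(inColumn, 1, 300) FROM @rows;
END
In the copy activity sink you would then point sqlWriterStoredProcedureName at the procedure and sqlWriterTableType at the table type, as the linked page describes.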
I want to do something like this in ORMLite:
SELECT *, COUNT(title) as titleCount from table1 group by title;
Is there any way to do this via QueryBuilder without the need for queryRaw?
The documentation states that the use of COUNT() and the like necessitates the use of selectRaw(). I hoped for a way around this - not having to write my SQL as strings is the main reason I chose to use ORMLite.
http://ormlite.com/docs/query-builder
selectRaw(String... columns): Add raw columns or aggregate functions (COUNT, MAX, ...) to the query. This will turn the query into something only suitable for using as a raw query. This can be called multiple times to add more columns to select. See section Issuing Raw Queries.
Further information on the use of selectRaw() as I was attempting much the same thing:
Documentation states that if you use selectRaw() it will "turn the query into" one that is supposed to be called by queryRaw().
What it does not explain is that, while multiple calls to selectColumns() or selectRaw() are valid on their own (if you exclusively use one or the other), calling selectRaw() after selectColumns() has a 'hidden' side effect of wiping out any columns you previously selected with selectColumns().
I believe that the ORMLite documentation for selectRaw() would be improved by a note that its use is not intended to be mixed with selectColumns().
QueryBuilder<EmailMessage, String> qb = emailDao.queryBuilder();
qb.selectColumns("emailAddress"); // This column is not selected due to later use of selectRaw()!
qb.selectRaw("COUNT (emailAddress)");
ORMLite examples are not as plentiful as I'd like, so here is a complete example of something that works:
QueryBuilder<EmailMessage, String> qb = emailDao.queryBuilder();
qb.selectRaw("emailAddress"); // This can also be done with a single call to selectRaw()
qb.selectRaw("COUNT (emailAddress)");
qb.groupBy("emailAddress");
GenericRawResults<String[]> rawResults = qb.queryRaw(); // Returns results with two columns
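To consume those results, you iterate the rows; each one arrives as a String[] in select order:
for (String[] row : rawResults) {
    String emailAddress = row[0];         // the grouped column
    long count = Long.parseLong(row[1]);  // COUNT (emailAddress)
}
// close rawResults when done; it holds a database connection open until closed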
Is there any way to do this via QueryBuilder without the need for queryRaw(...)?
The short answer is no because ORMLite wouldn't know what to do with the extra count value. If you had a Table1 entity with a DAO definition, what field would the COUNT(title) go into? Raw queries give you the power to select various fields but then you need to process the results.
With the code right now (v5.1), you can define a custom RawRowMapper and then use the dao.getRawRowMapper() method to process the results for Table1 and tack on the titleCount field by hand.
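For illustration, a hand-rolled mapper along those lines might look like this (TitleCount is a hypothetical holder class, not something ORMLite provides):
GenericRawResults<TitleCount> results = dao.queryRaw(
        "select title, count(title) from table1 group by title",
        new RawRowMapper<TitleCount>() {
            public TitleCount mapRow(String[] columnNames, String[] resultColumns) {
                // resultColumns[0] is title, resultColumns[1] is the count
                return new TitleCount(resultColumns[0], Long.parseLong(resultColumns[1]));
            }
        });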
I've got an idea how to accomplish this in a better way in ORMLite. I'll look into it.
Does anyone know how to use the PostgreSQL to_tsquery() function in SQLAlchemy? I searched a lot on Google but didn't find anything I could understand. Please help.
I was hoping it would be available on the filter function, like this:
session.query(TableName).filter(Table.column_name.to_tsquery(search_string)).all()
The expected SQL for the above query is something like this:
Select column_name
from table_name t
where t.column_name @@ to_tsquery(:search_string)
The .op() method allows you to generate SQL for arbitrary operators.
session.query(TableName).filter(
    Table.c.column_name.op('@@')(func.to_tsquery(search_string))  # func is sqlalchemy.func
).all()
For these types of arbitrary queries, you can embed the SQL directly into your query:
session.query(TableName).\
    filter("t.column_name @@ to_tsquery(:search_string)").\
    params(search_string=search_string).all()
You should also be able to parameterize t.column_name, but I can't find the docs for that just now. Note that newer SQLAlchemy versions require wrapping a raw SQL string like this in text() before passing it to filter().
This may have been added recently but worth adding as a more standard solution.
query.filter(Model.attribute.match('your search string'))
does it for you as it looks for the right operation available for your dialect.
See the official docs:
https://docs.sqlalchemy.org/en/13/dialects/postgresql.html#full-text-search
Of course, this assumes the table you are querying is a view built with a to_tsvector attribute to apply the @@ operation to.
My fifty cents in 2021, following the docs:
None of the previous answers mention how to cast the text column the way Postgres expects: to_tsvector('english', column). I have a text column indexed as a tsvector, and this is the way:
select(mytable.c.id).where(
    func.to_tsvector('english', mytable.c.title)
        .match('somestring', postgresql_regconfig='english')
)
In my case, I didn't want to use to_tsquery, which is what .match forces. A more intuitive option when you have a search input like Stack Overflow's is websearch_to_tsquery, so I mixed in part of jd's response.
I finally applied it as a .filter() with the following statement:
func.to_tsvector('english', Table.column_name)
    .op('@@')(func.websearch_to_tsquery('english', 'string to search'))
Here websearch_to_tsquery takes the regconfig ('english') as its first argument, the same way to_tsvector does; the postgresql_regconfig keyword belongs to .match(), not to func calls.
I think this is the general formula and it applies to to_tsquery too.
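Putting it together, a complete filter might look like this (Table and column_name are placeholders for your own model, and websearch_to_tsquery needs PostgreSQL 11 or later):
from sqlalchemy import func

results = session.query(Table).filter(
    func.to_tsvector('english', Table.column_name)
    .op('@@')(func.websearch_to_tsquery('english', 'string to search'))
).all()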