We know we can write a query like
select avg val by category from tab
But what if I need to write a complicated, customized function like
select myfunc by category from tab
where myfunc computes with multiple columns from tab. For example, inside myfunc I might do another layer of select-by, some filtering, etc. As a basic example, how do I wrap the a+b+c+d below
select a+b+c+d by category from tab
inside a myfunc that has visibility into columns a, b, c, and d and can do some manipulation with them?
You can replace avg quite easily with your own function like so:
select {[a;b;c;d]a+b+c+d}[a;b;c;d] by category from tab
If you want to apply it row by row, use each-both ('):
select {[a;b;c;d]a+b+c+d} ' [a;b;c;d] by category from tab
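Equivalently, you can pull the lambda out into a named function, like the myfunc in your question (a sketch assuming tab has columns a, b, c, d and category as above):
myfunc:{[a;b;c;d] a+b+c+d}
select myfunc[a;b;c;d] by category from tab
Inside myfunc the arguments are the column values for each category group, so any manipulation of a, b, c and d can happen there.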
Could you provide an example of what you are trying to achieve with the additional by/filtering inside the function? It doesn't seem like the best approach to me.
You can pass the columns into your function in tabular format, e.g.:
q)t:([]col1:10?`a`b`c;col2:10?10;col3:10?1f;col4:10?.z.D)
q)select {break}[([]col2;col3;col4)] by col1 from t
'break
[1] {break}
^
q))x
col2 col3      col4
-------------------------
9    0.5785203 2008.02.04
7    0.1959907 2003.07.05
8    0.6919531 2007.12.27
If you're going to use all columns inside your function, then another approach is to group the table into subtables and run a function on each subtable:
func each t group t`col1
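As a sketch, func is any unary function that receives one subtable per distinct col1 value and can see all of its columns; for example, a hypothetical aggregate over col2 and col3 (using the t defined above):
q)func:{`sumcol2`avgcol3!(sum x`col2;avg x`col3)}
q)func each t group t`col1
This returns a dictionary keyed by the distinct col1 values, with one aggregate result per group.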
Related
I have a table that I need to delete various words/characters from. To do this, I have been using regexp_replace with multiple alternative patterns. An example is below:
select regexp_replace(combined,'\y(NAME|001|CONTAINERS:|MT|COUNT|PCE|KG|PACKAGE)\y','', 'g')
as description, id from export_final;
However, the full list contains around 70 different patterns that I replace out of the description. As you can imagine, the code is very cluttered. This leads me to my question: is there a way to put these patterns into another table and then use that table to check the descriptions?
Of course. Populate your desired 'other' table with the patterns you need, then create a CTE that uses the string_agg function to build the regex. Example:
create table exclude_list( pattern_word text);
insert into exclude_list(pattern_word)
values('NAME'),('001'),('CONTAINERS:'),('MT'),('COUNT'),('PCE'),('KG'),('PACKAGE');
with exclude as
( select '\y(' || string_agg(pattern_word,'|') || ')\y' regex from exclude_list )
-- CTE simulates actual table to provide test data
, export_final (id,combined) as (values (0,'This row 001 NAME Main PACKAGE has COUNT 3 units'),(1,'But single package can hold 6 KG'))
select regexp_replace(combined,regex,'', 'g')
as description, id
from export_final cross join exclude;
I'm trying to pass multiple integer values to a subreport, in order to use them in the following SQL query:
SELECT *
FROM Table as T
WHERE Code IN (#Code)
I want to pass 3 integer values to #Code: 1, 2 and 3.
I tried various combinations of Split() and Join(), but none worked.
You don't need to do anything. If your parameter is set to be multi-value and takes its values from a query or a list of integers, then SSRS will automatically inject a comma-separated list of values into your main dataset query.
In your case, if the values 1, 2 & 3 were selected and your main dataset looked like your example
SELECT *
FROM Table as T
WHERE Code IN (#Code)
then what actually gets passed to the server would be this:
SELECT *
FROM Table as T
WHERE Code IN (1,2,3)
There is no need to do JOINS or SPLITS and no need to change dataset parameters. It will just work.
When I use a multi-value parameter in SSRS I have to use a table-valued function in the query. Essentially, it turns your parameter into a table that you can INNER JOIN on. For example:
SELECT *
FROM Table as T
INNER JOIN tablevaluefunction(#Code, ',') as P -- the ',' is the delimiter for your list
ON T.Code = P.value
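If you do go the table-valued function route, here is a minimal sketch of such a split function (the name dbo.SplitToInts is made up for illustration), assuming SQL Server 2016+ where the built-in STRING_SPLIT is available:
CREATE FUNCTION dbo.SplitToInts (@list varchar(max), @delim char(1))
RETURNS TABLE
AS
RETURN
(
    -- STRING_SPLIT emits one row per delimited item, in a column named value
    SELECT CAST(value AS int) AS value
    FROM STRING_SPLIT(@list, @delim)
);
The join above would then read INNER JOIN dbo.SplitToInts(#Code, ',') as P ON T.Code = P.value, with #Code standing for the report parameter as written in the question.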
Somehow, I can only find examples that show how to add one column.
So I have written this code, which works, but I know there is a much better way to do this:
Table t already exists with columns filled with data, and I need to add new columns that are initially null.
t: update column1:` from t;
t: update column2:` from t;
t: update column3:` from t;
t: update column4:` from t;
I tried making it a function:
colNames:`column1`column2`column3`column4;
t:{update x:` from t}each colNames;
But this only added one column and called it x.
Any suggestions to improve this code will be greatly appreciated. I have to add a lot more than just 4 columns and my code is very long because of this. Thank you!
Various ways to achieve this....
q)newcols:`col3`col4;
q)@[tab;newcols;:;`]
col1 col2 col3 col4
-------------------
a    1
b    2
c    3
Can also specify different types:
q)@[tab;newcols;:;(`;0N)]
col1 col2 col3 col4
-------------------
a    1
b    2
c    3
Or do a functional update
q)![`tab;();0b;newcols!count[newcols]#enlist (),`]
`tab
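For completeness, the original each-loop can be repaired by folding with over (/) instead of each, so each step updates the output of the previous step rather than the original table; a sketch using the functional form of update with the t and colNames from the question:
q)t:{![x;();0b;enlist[y]!enlist (),`]}/[t;colNames]
Here x is the accumulated table and y is the next column name, so all four columns end up on the same table.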
I have a question:
select *
from
(
select *
from
(
select User_Id,User_Name,Password
from <table> T
where IsActive = 1
) k
) m
In this case, is it required to mention column names in the other two SELECT statements?
Mentioning columns is always better than keeping *,
but what is the actual use in the top two SELECTs above, since we are already getting the selected columns from the derived tables?
There's no need to mention each column instead of doing SELECT * FROM. However, if you don't need all the columns, you can optimize by selecting only the ones you need: SELECT a, b, c FROM.
There's no value added or optimization in having two nested SELECT * without any sort of calculation. Here's an article on Transact-SQL Derived Tables where I recommend you check out the Advantages of SQL derived tables section. There's a good example in there.
Is it possible to use the names of the actual columns for the order by clause?
I am using a view to let a client use a report writer (Pentaho), and this would make things easier for them.
To clarify, I want the columns in alphabetical order of the column names themselves. I want to order the columns, not the data IN the columns.
In PostgreSQL you can try:
SELECT column_name, data_type
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'my_table'
ORDER BY column_name;
If what you mean is to change the order of the columns themselves according to their names (that would make sense only if you are using SELECT *, I guess), I'm afraid that's not possible, at least not straightforwardly. And that sounds very unSQL, I'd say...
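If dynamically generated SQL is an acceptable (admittedly non-straightforward) workaround, the catalog query above can also be used to build the column list in alphabetical order; a sketch, assuming PostgreSQL and the my_table name used above:
SELECT 'SELECT ' || string_agg(quote_ident(column_name::text), ', ' ORDER BY column_name)
       || ' FROM my_table;' AS generated_query
FROM information_schema.columns
WHERE table_name = 'my_table';
The generated statement can then be pasted into a view definition, so the column order stays alphabetical without maintaining the list by hand.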
Sure, you can order by column name, column alias, or column position:
select a, b from table order by b;
select a as x, b as y from table order by x, y;
select a, b from table order by 1;
You can create the view with the columns in any order you like. Then SELECT * FROM your_view queries will return the columns in the order specified by the view.
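For example (view, table and column names here are hypothetical), listing the columns alphabetically in the view definition fixes the order that SELECT * returns:
CREATE VIEW orders_alpha AS
SELECT amount, customer_id, order_date, status
FROM orders;
SELECT * FROM orders_alpha;  -- columns come back as amount, customer_id, order_date, status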