Aginity Netezza macro containing a list

I would like to put a list of names in my Aginity Netezza macro. For instance, I would like to be able to repeatedly use the list ("Adam", "Bill", "Cynthia", "Dick", "Ella", "Fanny") in my future queries, e.g. in WHERE clauses.
My questions are:
(1) Is there a limit to how many characters I can put inside the "Value" window of the Query Parameters Editor?
(2) Is there a way to make this work without using a macro? For instance, predefining this list somewhere?

I would put the list into a (temporary) table and simply join to it when necessary:
Create temp table names as
Select 'Adam'::varchar(50) as name
Union all Select 'Bill'::varchar(50)
Union all Select 'Cynthia'::varchar(50)
Union all Select 'Dick'::varchar(50)
Union all Select 'Ella'::varchar(50)
Union all Select 'Fanny'::varchar(50)
;
Select x.a, x.b
from x
where x.name in (select name from names)
;
Select
case
when x.name in (select name from names)
then 'Special'
Else 'Other'
End as NameGrp,
Count(*) as size,
Sum(income) as TotalIncome
from x
Group by NameGrp
Order by size desc
;
Alternatively, Netezza has an extension toolkit that enables ARRAY data types, but the first query in particular will not perform well if you use an array for that purpose. Interested? See here: https://www.ibm.com/support/knowledgecenter/en/SSULQD_7.2.1/com.ibm.nz.sqltk.doc/c_sqlext_array.html or google for examples.

Related

Header not coming while exporting Db2 select query results to CSV

I'm trying to execute the below query to export the results to CSV. I'm able to export the data to CSV, but the headers are missing in the file. Is there any way we can achieve this? I'm executing the file in the form "db2 -tvmf D:\Db.sql"
connect to ****** user ***** using ******
export to "D:\Vikas.csv" OF DEL MESSAGES
select
'ROW_NUM',
'DETAIL_TYPE_CD',
'ADMIN_FEES_TICKET',
'ADMINISTRATIVE_FEES',
'BASE_RENT',
'CITATIONS',
'COLLECTION_REPO_FEES',
'DESC',
'EFFECTIVE_DATE',
'LATE_CHARGE',
'MISC_FEE',
'STATUS_CD',
'ROW_ID',
'ROW_ID',
'BUILD',
'REVERSE_FLG',
'NSF_FLG',
'PR_CON_ID',
'PROC_DATE',
'PROPERTY_TAX',
'REGISTRATION_FEES',
'REPAIR_FEES',
'SALES_TAX',
'TERMINATION_FEES',
'TOTAL_TRANS',
'TRANSACTION_TYPE'
from sysibm.sysdummy1
UNION ALL (select
T1.ROW_NUM,
T5.DETAIL_TYPE_CD,
T1.ADMIN_FEES_TICKET,
T1.ADMINISTRATIVE_FEES,
T1.BASE_RENT,
T1.CITATIONS,
T1.COLLECTION_REPO_FEES,
T1.DESC,
T1.EFFECTIVE_DATE,
T1.LATE_CHARGE,
T1.MISC_FEE,
T2.STATUS_CD,
T4.ROW_ID,
T3.ROW_ID,
T2.BUILD,
T1.REVERSE_FLG,
T1.NSF_FLG,
T2.PR_CON_ID,
T1.PROC_DATE,
T1.PROPERTY_TAX,
T1.REGISTRATION_FEES,
T1.REPAIR_FEES,
T1.SALES_TAX,
T1.TERMINATION_FEES,
T1.TOTAL_TRANS,
T1.TRANSACTION_TYPE
FROM
SIEBEL.LSE_INPHIST_VIEW T1
LEFT OUTER JOIN SIEBEL.S_ASSET T2 ON T1.ACCOUNT_NUM = T2.ASSET_NUM
LEFT OUTER JOIN SIEBEL.S_ASSET_CON T3 ON T2.ROW_ID = T3.ASSET_ID AND
T3.RELATION_TYPE_CD = 'Obligor'
LEFT OUTER JOIN SIEBEL.S_ASSETCON_ADDR T4 ON T3.ROW_ID =
T4.ASSET_CON_ID AND T4.USE_TYPE_CD = 'Bill To'
LEFT OUTER JOIN SIEBEL.S_PROD_INT T5 ON T2.PROD_ID = T5.ROW_ID
WHERE
(T1.ACNT_ID = '01003501435'))
ORDER BY
T1.ACNT_ID DESC,T1.PROC_DATE DESC WITH UR
I have included the updated query now in the post.
The Db2-LUW export command lacks the ability to add column headers to the output file; it only exports whatever the SELECT statement returns.
So when you want column headers in the CSV file, you have a few options.
One way to do it (when there is no ORDER BY) is to make the SELECT statement a UNION of two queries: the first query returns a single row containing the column names as literals, then you union this with your real query. It means you must hand-craft the column names of the first query to match the real second query. In your case, for example, it might look like:
SELECT 'row_num', 'detail_type_cd', ....
from sysibm.sysdummy1
UNION
SELECT t1.ROW_NUM, T5.DETAIL_TYPE_CD, ...
(You have to manually write out the column names and put them in single quotes. If you want Db2 to work out the column names for you, you can generate the list from the SYSCAT.COLUMNS catalog view.)
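For example, a minimal sketch against a hypothetical two-column table t(id, name): every data column has to be cast to a character type so it is union-compatible with the header row, and without an ORDER BY the position of the header row is not strictly guaranteed.
export to "D:\out.csv" of del messages export.msg
select 'ID', 'NAME' from sysibm.sysdummy1
union all
select varchar(id), name from t;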
If you have an ORDER BY, you can run two separate export commands (i.e. no union) outputting to two separate files, and then use operating-system commands to concatenate the output files, like this:
export to headers.csv of del select 'colname1','colname2'...from sysibm.sysdummy1;
export to data.csv of del select ...;
-- for MS-Windows
!copy /a headers.csv + data.csv data_with_headers.csv ;
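On Linux/Unix the same concatenation can be done with cat, assuming the CLP shell escape with ! is available there as well:
-- for Linux/Unix
!cat headers.csv data.csv > data_with_headers.csv ;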
Another (possibly simpler) way to do it, with v11.5 (and higher) versions of Db2-LUW, is to not use the export command but instead to create an external table, which lets you specify an option includeheader on among many other options for CSV files. You can search this site for examples, and reference the documentation.
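A minimal sketch of the external table approach, assuming Db2 11.5+ and a hypothetical table t(id, name); check the CREATE EXTERNAL TABLE documentation for the full option list:
CREATE EXTERNAL TABLE '/tmp/data_with_headers.csv'
USING (DELIMITER ',' INCLUDEHEADER ON)
AS SELECT id, name FROM t;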

Using a list as replacement for singular patterns in regexp_replace

I have a table that I need to delete random words/characters out of. To do this, I have been using a regexp_replace function with the addition of multiple patterns. An example is below:
select regexp_replace(combined,'\y(NAME|001|CONTAINERS:|MT|COUNT|PCE|KG|PACKAGE)\y','', 'g')
as description, id from export_final;
However, in the full list there are around 70 different patterns that I replace out of the description. As you can imagine, the code is very cluttered. This leads me to my question: is there a way to put these patterns into another table and then use that table to check the descriptions?
Of course. Populate your desired 'other' table with the patterns you need. Then create a CTE that uses the string_agg function to build the regex. Example:
create table exclude_list( pattern_word text);
insert into exclude_list(pattern_word)
values('NAME'),('001'),('CONTAINERS:'),('MT'),('COUNT'),('PCE'),('KG'),('PACKAGE');
with exclude as
( select '\y(' || string_agg(pattern_word,'|') || ')\y' regex from exclude_list )
-- CTE simulates actual table to provide test data
, export_final (id,combined) as (values (0,'This row 001 NAME Main PACKAGE has COUNT 3 units'),(1,'But single package can hold 6 KG'))
select regexp_replace(combined,regex,'', 'g')
as description, id
from export_final cross join exclude;
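One follow-up worth noting: deleting words this way leaves doubled spaces behind. A second regexp_replace can collapse them; a sketch reusing the exclude CTE against the real export_final table from the question:
with exclude as
( select '\y(' || string_agg(pattern_word,'|') || ')\y' regex from exclude_list )
select id, trim(regexp_replace(regexp_replace(combined, regex, '', 'g'), '\s+', ' ', 'g')) as description
from export_final cross join exclude;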

IBM db2 union query bad results

When I execute this query in SQL Server, which calls an IBM Db2 linked server through openquery,
Select * from openquery(ibm,'
Select COST_AMT,''Query1'' as Query
from table
where clause
with ur;
')
union
Select * from openquery(ibm,'
Select COST_AMT,''Query2'' as Query
from table
different where clause
with ur;
')
I get different results from the union query than when I execute the queries separately and bring the results in together. I have tried putting the union inside the openquery as well, so I believe this is an IBM thing. The results appear to be a distinct selection of COST_AMT, sorted lowest to highest.
ie:
1,Query1
2,Query1
3,Query1
1,Query2
2,Query2
3,Query2
but the data is actually like this:
1,Query1
1,Query1
1,Query1
2,Query1
2,Query1
3,Query1
1,Query2
1,Query2
1,Query2
2,Query2
2,Query2
3,Query2
Am I missing something about the Db2 union query? I realize I could sum and get the answer (which is what I plan on doing), but I want to know more about why this is happening.
This has nothing to do with "ibm" or "db2" -- the SQL UNION operator removes duplicates. To retain duplicates use UNION ALL.
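A quick way to see the difference (T-SQL shown; on Db2 append from sysibm.sysdummy1 to each select):
select 1 as n union select 1;      -- one row: UNION removes the duplicate
select 1 as n union all select 1;  -- two rows: UNION ALL keeps it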

Postgres: Distinct but only for one column

I have a table on pgsql with names (more than 1 million rows), but also many duplicates. I select 3 fields: id, name, metadata.
I want to select them randomly with ORDER BY RANDOM() and LIMIT 1000, so I do this in many steps to save some memory in my PHP script.
But how can I do that so it only gives me a list having no duplicates in names?
For example [1,"Michael Fox","2003-03-03,34,M,4545"] will be returned but not [2,"Michael Fox","1989-02-23,M,5633"]. The name field is the most important and must be unique in the list every time I do the select, and it must be random.
I tried with GROUP BY name, but then it expects me to have id and metadata in the GROUP BY as well or in an aggregate function, but I don't want to have them somehow filtered.
Anyone knows how to fetch many columns but do only a distinct on one column?
To do a distinct on only one (or n) column(s):
select distinct on (name)
name, col1, col2
from names
This will return an arbitrary one of the rows for each name. If you want to control which of the rows will be returned you need to order:
select distinct on (name)
name, col1, col2
from names
order by name, col1
This will return the first row for each name when ordered by col1.
distinct on:
SELECT DISTINCT ON ( expression [, ...] ) keeps only the first row of each set of rows where the given expressions evaluate to equal. The DISTINCT ON expressions are interpreted using the same rules as for ORDER BY (see above). Note that the “first row” of each set is unpredictable unless ORDER BY is used to ensure that the desired row appears first.
The DISTINCT ON expression(s) must match the leftmost ORDER BY expression(s). The ORDER BY clause will normally contain additional expression(s) that determine the desired precedence of rows within each DISTINCT ON group.
Anyone knows how to fetch many columns but do only a distinct on one column?
You want the DISTINCT ON clause.
You didn't provide sample data or a complete query so I don't have anything to show you. You want to write something like:
SELECT DISTINCT ON (name) fields, id, name, metadata FROM the_table;
This will return an unpredictable (but not "random") set of rows. If you want to make it predictable add an ORDER BY per Clodaldo's answer. If you want to make it truly random, you'll want to ORDER BY random().
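Because the DISTINCT ON expression must match the leftmost ORDER BY expression, the random ordering has to happen in an outer query. A sketch, keeping the hypothetical the_table name and the asker's LIMIT:
SELECT id, name, metadata
FROM (
    SELECT DISTINCT ON (name) id, name, metadata
    FROM the_table
) AS deduped
ORDER BY random()
LIMIT 1000;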
To do a distinct on n columns:
select distinct on (col1, col2) col1, col2, col3, col4 from names
SELECT NAME, MAX(ID) as ID, MAX(METADATA) as METADATA
from SOMETABLE
GROUP BY NAME
Note that MAX(ID) and MAX(METADATA) are computed independently, so the returned id and metadata can come from different rows for the same name.

Postgres SELECT values where only some columns respect WHERE clause

I have a table that I wish to select from. I want to select the same column twice, once with some date based filtering in the WHERE clause, and again without the filtering. How can I go about doing this?
Thanks
Use a UNION query, possibly with a CTE.
You haven't provided table definitions so I can't provide real SQL. You're looking for something like this:
SELECT *
FROM thetable
WHERE ...datefilter ...
UNION ALL
SELECT *
FROM thetable
WHERE ... otherfilter...;
You may find common table expressions ("WITH" queries) useful too.
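For example, a sketch of the CTE form, with hypothetical names (a value column col and a date column created_at standing in for your real columns):
WITH base AS (
    SELECT col, created_at FROM thetable
)
SELECT col, 'filtered' AS source
FROM base
WHERE created_at >= DATE '2024-01-01'   -- the date filter
UNION ALL
SELECT col, 'all' AS source
FROM base;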