Does ungroup work on a table with multiple columns to ungroup? - kdb

So I have the following table:
q)t:flip`col1`col2`col3!((enlist`na;(`na`emea);`na);`test1`test2`test3;(enlist`uat;enlist`prd;(`uat`prd`dr)))
col1 col2 col3
--------------------------
,`na test1 ,`uat
`na`emea test2 ,`prd
`na test3 `uat`prd`dr
Can I use ungroup on this table?

Short answer: no. The first line of the documentation for ungroup (https://code.kx.com/q/ref/ungroup/) states: "Where x is a table, in which some cells are lists, but for any row, all lists are of the same length".
The second row of your table contains lists in col1 and col3, but these are of different lengths.
ungroup will work on the first and last rows of your table, because in those rows every cell that is a list has the same length.
q)ungroup 1#t
col1 col2 col3
---------------
na test1 uat
q)ungroup -1#t
col1 col2 col3
---------------
na test3 uat
na test3 prd
na test3 dr
q)(1#t),-1#t
col1 col2 col3
----------------------
,`na test1 ,`uat
`na test3 `uat`prd`dr
q)ungroup (1#t),-1#t
col1 col2 col3
---------------
na test1 uat
na test3 uat
na test3 prd
na test3 dr
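To make the same-length rule concrete outside of q, here is a minimal Python sketch (an illustration, not kdb+'s implementation) of what ungroup does to each row: atoms are repeated, list cells are paired positionally, and a row whose list cells have different lengths is an error.

```python
# Hypothetical Python model of kdb+'s ungroup, using lists of dicts as rows.
def ungroup(rows):
    out = []
    for row in rows:
        lengths = {len(v) for v in row.values() if isinstance(v, list)}
        if len(lengths) > 1:
            # This is the case of the table's second row: col1 has 2 items, col3 has 1.
            raise ValueError("list cells in a row must all be the same length")
        n = lengths.pop() if lengths else 1
        for i in range(n):
            # Atoms repeat; list cells contribute their i-th element.
            out.append({k: (v[i] if isinstance(v, list) else v)
                        for k, v in row.items()})
    return out

# The first and last rows of the table above:
rows = [{"col1": ["na"], "col2": "test1", "col3": ["uat"]},
        {"col1": "na", "col2": "test3", "col3": ["uat", "prd", "dr"]}]
```

Running `ungroup(rows)` expands these two rows into the same four rows that `ungroup (1#t),-1#t` produces in q.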

As in my answer to your other related question (How can I prep this table for ungroup), you can ungroup one column at a time if you use a custom ungrouper:
q){#[x where count each x y;y;:;raze x y]}/[t;`col1`col3]
col1 col2 col3
---------------
na test1 uat
na test2 prd
emea test2 prd
na test3 uat
na test3 prd
na test3 dr
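The logic of that q lambda, repeating each row by the length of the list in one column and then flattening that column, can be sketched in Python as follows (an illustrative model, not the q implementation):

```python
# Hypothetical per-column ungrouper mirroring {#[x where count each x y;y;:;raze x y]}.
def ungroup_col(rows, col):
    out = []
    for row in rows:
        cell = row[col] if isinstance(row[col], list) else [row[col]]
        # Repeat the row once per element of the list in `col`,
        # replacing the list with one element each time.
        for v in cell:
            new = dict(row)
            new[col] = v
            out.append(new)
    return out

t = [{"col1": ["na"], "col2": "test1", "col3": ["uat"]},
     {"col1": ["na", "emea"], "col2": "test2", "col3": ["prd"]},
     {"col1": ["na"], "col2": "test3", "col3": ["uat", "prd", "dr"]}]

# Fold over the list columns, like the /[t;`col1`col3] iteration in q:
result = t
for col in ("col1", "col3"):
    result = ungroup_col(result, col)
```

`result` ends up with the same six rows as the q output above.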

Related

Pull values from one column that match with 2 separate values in another column in PostgreSQL

Let me share an example:
Col1 Col2
123 A
456 B
234 A
456 A
098 A
567 B
567 A
I need a PostgreSQL query which returns something like
Result
456
567
both values 456, 567 from Col1 match with values A and B from Col2.
Group by Col1 and count the distinct Col2 values. Keep only those with count = 2.
select col1 from the_table
where col2 in ('A','B')
group by col1
having count(distinct col2) = 2;
DB-fiddle
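The answer above can be verified end to end; here is a runnable sketch using Python's sqlite3 as a stand-in for PostgreSQL (sqlite also supports COUNT(DISTINCT ...); the table and column names follow the question, with ORDER BY added only to make the output deterministic):

```python
import sqlite3

# In-memory database with the question's sample data.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE the_table (col1 INT, col2 TEXT)")
con.executemany("INSERT INTO the_table VALUES (?, ?)",
                [(123, "A"), (456, "B"), (234, "A"), (456, "A"),
                 (98, "A"), (567, "B"), (567, "A")])

# Keep only col1 values that appear with both 'A' and 'B'.
rows = con.execute("""
    SELECT col1 FROM the_table
    WHERE col2 IN ('A', 'B')
    GROUP BY col1
    HAVING COUNT(DISTINCT col2) = 2
    ORDER BY col1
""").fetchall()
# rows -> [(456,), (567,)]
```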

Trimming column values while selecting with InfluxQL in Grafana Dashboard

I have a table like below. Col1 values have parentheses, like val1(12). When I write an InfluxQL query I want to remove the parentheses and keep only the remaining text, so that when the query runs, Val1(12) in the output becomes Val1.
mytable:
Col1       Col2 Col3 Col4
val1(12)   332  0    1
val2(4234) 222  0    1
val3(221)  111  0    1
If I write select * from mytable it will give the output below:
Col1       Col2 Col3 Col4
val1(12)   332  0    1
val2(4234) 222  0    1
val3(221)  111  0    1
But I want the parentheses to be removed when I run the query, like below:
Col1 Col2 Col3 Col4
val1 332  0    1
val2 222  0    1
val3 111  0    1
I couldn't find a solution for this. Should I use trim, a wildcard, or a regex to do this? The InfluxDB shell version is 1.7.6.
We will run this InfluxQL in a Grafana dashboard.
I think you'll want to make use of SUBSTRING.
e.g.
SELECT SUBSTRING(Col1, 0, CHARINDEX('(', Col1)), Col2, Col3
FROM MyTable
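Note that SUBSTRING and CHARINDEX are T-SQL functions; as far as I know InfluxQL 1.x has no string-manipulation functions in SELECT, so a common workaround is to strip the suffix client-side after querying (e.g. before feeding the series names to Grafana). A regex sketch of that post-processing step:

```python
import re

def strip_parens(value):
    # Remove a trailing "(...)" suffix: "val1(12)" -> "val1".
    # Values without parentheses pass through unchanged.
    return re.sub(r"\(.*\)$", "", value)

print(strip_parens("val1(12)"))    # val1
print(strip_parens("val2(4234)"))  # val2
```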

Select Count (distinct col) query to show number of rows and column in result - postgresql

I have a PostgreSQL table with a few columns: col1, col2, col3, col4. I want to count how many rows there are for each unique value in col3. The result needs to show both the values and the row counts. How do I form such a query? I am using pgAdmin 4.
col1 col2 col3 col4
x1   y1   123  xx-xx-xxxx
x2   y2   123  xx-xx-xxxx
x3   y3   123  xx-xx-xxxx
x4   y4   111  xx-xx-xxxx
x5   y5   111  xx-xx-xxxx
I tried using select count(distinct col3) from table_ where col3_ts > '2019-09-17'
But that only shows the total number of distinct values as a single number, e.g. 8999.
The example results are like:
#  col3  # of rows
------------------
1  123   3
2  111   2
------------------
This is the classic GROUP BY use case:
select col3, count(*) from table_ where col3_ts > '2019-09-17' group by col3;
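Here is a runnable sketch of that query using Python's sqlite3 as a stand-in for PostgreSQL (the timestamp column is stored as text for simplicity; ORDER BY is added only to make the output order deterministic):

```python
import sqlite3

# In-memory table matching the question's sample data.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE table_ (col1 TEXT, col2 TEXT, col3 INT, col3_ts TEXT)")
con.executemany("INSERT INTO table_ VALUES (?, ?, ?, ?)",
                [("x1", "y1", 123, "2019-09-18"),
                 ("x2", "y2", 123, "2019-09-18"),
                 ("x3", "y3", 123, "2019-09-18"),
                 ("x4", "y4", 111, "2019-09-18"),
                 ("x5", "y5", 111, "2019-09-18")])

# One row per distinct col3 value, with its row count.
rows = con.execute("""
    SELECT col3, COUNT(*) FROM table_
    WHERE col3_ts > '2019-09-17'
    GROUP BY col3
    ORDER BY COUNT(*) DESC
""").fetchall()
# rows -> [(123, 3), (111, 2)]
```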

create a table where one insert is a batch

I want to create a table where one insert is a batch, and if the same rows are inserted again it should throw an error.
Here is a simple example.
This is one insert; if we try to insert these rows again it should throw an error (it should not insert):
     col1 col2  col3 col4(ID)
row1 a    0.1   xyz  1
row2 b    0.2   abc  1
row3 c    0.3   pqr  1
Now I have changed the insert a little, so this should count as a new insert:
     col1 col2  col3 col4(ID)
row1 a    0.1   xyz  2
row2 b    0.211 abc  2
row3 c    0.3   pqr  2
I tried a composite primary key but I was missing something; I'm seeing this error:
ERROR: duplicate key value violates unique constraint.
I want the error thrown only when all three rows are repeated. If anything is changed in these 3 rows, it should be a new insert.
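A per-row composite key cannot express "the whole batch must be unique". One approach (my suggestion, not from the question) is to hash a canonical form of the entire batch and put a UNIQUE constraint on that digest, so re-inserting an identical batch fails while any changed value produces a new digest. A sketch using Python's sqlite3 (the table and function names here are illustrative):

```python
import hashlib
import sqlite3

con = sqlite3.connect(":memory:")
# One row per batch; the digest column enforces batch-level uniqueness.
con.execute("CREATE TABLE batches (batch_id INTEGER PRIMARY KEY, digest TEXT UNIQUE)")
con.execute("CREATE TABLE batch_rows (col1 TEXT, col2 REAL, col3 TEXT, batch_id INT)")

def insert_batch(con, rows):
    # Sort rows so the digest is independent of insert order, then hash.
    canon = repr(sorted(rows))
    digest = hashlib.sha256(canon.encode()).hexdigest()
    cur = con.execute("INSERT INTO batches (digest) VALUES (?)", (digest,))
    con.executemany("INSERT INTO batch_rows VALUES (?, ?, ?, ?)",
                    [r + (cur.lastrowid,) for r in rows])

batch = [("a", 0.1, "xyz"), ("b", 0.2, "abc"), ("c", 0.3, "pqr")]
insert_batch(con, batch)
try:
    insert_batch(con, batch)          # identical batch: rejected
except sqlite3.IntegrityError:
    print("duplicate batch rejected")
# One value changed (0.2 -> 0.211): a new digest, so a new batch.
insert_batch(con, [("a", 0.1, "xyz"), ("b", 0.211, "abc"), ("c", 0.3, "pqr")])
```

In PostgreSQL the same idea works with a UNIQUE column on a batches table, computing the digest in the application or in a trigger.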

TSQL, Pivot rows into single columns

Before, I had to solve something similar:
Here was my pivot and flatten for another solution:
I want to do the same thing on the example below but it is slightly different because there are no ranks.
In my previous example, the table looked like this:
LocationID Code Rank
1 123 1
1 124 2
1 138 3
2 999 1
2 888 2
2 938 3
And I was able to use this function to properly get my rows in a single column.
-- Check if tables exist, delete if they do so that you can start fresh.
IF OBJECT_ID('tempdb.dbo.#tbl_Location_Taxonomy_Pivot_Table', 'U') IS NOT NULL
DROP TABLE #tbl_Location_Taxonomy_Pivot_Table;
IF OBJECT_ID('tbl_Location_Taxonomy_NPPES_Flattened', 'U') IS NOT NULL
DROP TABLE tbl_Location_Taxonomy_NPPES_Flattened;
-- Pivot the original table so that you have
SELECT *
INTO #tbl_Location_Taxonomy_Pivot_Table
FROM [MOAD].[dbo].[tbl_Location_Taxonomy_NPPES] tax
PIVOT (MAX(tax.tbl_lkp_Taxonomy_Seq)
FOR tax.Taxonomy_Rank in ([1],[2],[3],[4],[5],[6],[7],[8],[9],[10],[11],[12],[13],[14],[15])) AS pvt
-- ORDER BY Location_ID
-- Flatten the tables.
SELECT Location_ID
,max(piv.[1]) as Tax_Seq_1
,max(piv.[2]) as Tax_Seq_2
,max(piv.[3]) as Tax_Seq_3
,max(piv.[4]) as Tax_Seq_4
,max(piv.[5]) as Tax_Seq_5
,max(piv.[6]) as Tax_Seq_6
,max(piv.[7]) as Tax_Seq_7
,max(piv.[8]) as Tax_Seq_8
,max(piv.[9]) as Tax_Seq_9
,max(piv.[10]) as Tax_Seq_10
,max(piv.[11]) as Tax_Seq_11
,max(piv.[12]) as Tax_Seq_12
,max(piv.[13]) as Tax_Seq_13
,max(piv.[14]) as Tax_Seq_14
,max(piv.[15]) as Tax_Seq_15
-- JOIN HERE
INTO tbl_Location_Taxonomy_NPPES_Flattened
FROM #tbl_Location_Taxonomy_Pivot_Table piv
GROUP BY Location_ID
So, then here is the data I would like to work with in this example.
LocationID Foreign Key
2 2
2 670
2 2902
2 5389
3 3
3 722
3 2905
3 5561
I have used PIVOT on data like this before, but the difference was that it also had a rank. Is there a way to get my foreign keys to show up in this format using a pivot?
locationID FK1 FK2 FK3 FK4
2 2 670 2902 5389
3 3 722 2905 5561
Another way I could look at doing this: I also have the values in this form:
LocationID Address_Seq
2 670, 5389, 2902, 2,
3 722, 5561, 2905, 3
etc
Is there any way I can get this into the same shape?
ID Col1 Col2 Col3 Col4
2  670  5389 2902 2
This, adding a rank column and reversing the order, should give you what you require:
SELECT locationid, [4] col1, [3] col2, [2] col3, [1] col4
FROM
(
SELECT locationid, foreignkey,rank from #Pivot_Table ----- temp table with a rank column
) x
PIVOT (MAX(x.foreignkey)
FOR x.rank in ([4],[3],[2],[1]) ) pvt
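The idea of that answer, manufacture a rank by position within each LocationID, then spread the keys across columns, can be sketched in plain Python (an illustration; unlike the T-SQL above, the columns here are numbered in insertion order rather than reversed):

```python
from collections import defaultdict

# (LocationID, ForeignKey) pairs from the question.
data = [(2, 2), (2, 670), (2, 2902), (2, 5389),
        (3, 3), (3, 722), (3, 2905), (3, 5561)]

# Step 1: group foreign keys by location; position in the list acts as the rank.
grouped = defaultdict(list)
for location_id, fk in data:
    grouped[location_id].append(fk)

# Step 2: pivot each group into named columns FK1..FKn.
pivoted = {loc: {f"FK{i + 1}": fk for i, fk in enumerate(fks)}
           for loc, fks in grouped.items()}
# pivoted[2] -> {'FK1': 2, 'FK2': 670, 'FK3': 2902, 'FK4': 5389}
```

Note this assumes the input order of the pairs is the order you want the columns in; in SQL you would pin that down with ROW_NUMBER() OVER (PARTITION BY LocationID ORDER BY ...) rather than relying on scan order.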