I am working with the output of a Postgres subquery and have a table with 20 columns (generated using a WITH clause).
The table looks something like this:
col1 col2 col3 ... col20
4    4    24
6    6    45
5    5    66
5    5    12
I want to write a query that removes the duplicated column. I tried selecting all the columns except the second, but I could not find a better way to do that.
The expected output is:
col1 col3 ... col20
4    24
6    45
5    66
5    12
Thanks
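For illustration: Postgres has no SELECT * EXCEPT syntax, so listing every wanted column explicitly is the usual approach. A minimal sketch of that, where the inline two-row CTE named cte is just a hypothetical stand-in for the real 20-column subquery:
WITH cte AS (
    SELECT 4 AS col1, 4 AS col2, 24 AS col3 -- stand-in for the real 20-column subquery
    UNION ALL
    SELECT 6, 6, 45
)
SELECT col1, col3 -- list every column except the duplicated col2
FROM cte;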
I have a table named "building" in PostgreSQL. It has several columns and millions of rows. Each row has a unique id. It looks like this:
id tags       height building:levels area
1  apartment  58 m   17              109
2  apartment  null   null            111
3  shed       7      2               75sqm
4  greenhouse 6m     3               159
5  industrial 16     2;4             105
6  commercial 27     8               474
And I have another csv file with cleaned data for the height and building:levels columns. The csv looks like this:
id height building:levels
1  58     17
3  7      2
4  6      3
5  16     4
6  27     8
I want to join the csv file back to the table on the server, and the final outcome might look like this:
id tags       height building:levels area
1  apartment  58     17              109
2  apartment  null   null            111
3  shed       7      2               75sqm
4  greenhouse 6      3               159
5  industrial 16     4               105
6  commercial 27     8               474
I want to replace the values in height and building:levels where the id in the table matches the id in the csv file. I've tried importing the data from the csv, but it didn't replace the values; it only added new rows. How can I achieve this?
You can import the csv file data into a temporary table first:
CREATE TEMPORARY TABLE IF NOT EXISTS building_temp (LIKE building);
COPY building_temp (id, height, "building:levels") FROM '/path/to/your_csv_file.csv' WITH (FORMAT csv, HEADER);
See the manual for the full list of COPY options. Note that COPY ... FROM reads a file on the database server; to load a file from your client machine, use psql's \copy instead.
and then update the building table from that temporary table:
UPDATE building AS b
SET height = bt.height
  , "building:levels" = bt."building:levels"
FROM building_temp AS bt
WHERE b.id = bt.id;
Rows with no match in the temporary table (such as id 2) are left untouched, so you won't get the extra rows you saw when importing the csv directly into building.
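As a side note, and only as a common pattern rather than part of the answer above: if you want the temporary table to clean itself up, you can scope the whole load-and-update to a single transaction with ON COMMIT DROP:
BEGIN;
CREATE TEMPORARY TABLE building_temp (LIKE building) ON COMMIT DROP; -- dropped automatically at COMMIT
COPY building_temp (id, height, "building:levels") FROM '/path/to/your_csv_file.csv' WITH (FORMAT csv, HEADER);
UPDATE building AS b
SET height = bt.height
  , "building:levels" = bt."building:levels"
FROM building_temp AS bt
WHERE b.id = bt.id;
COMMIT;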
Here is an example of a table from PostgreSQL.
I am learning SQL and can't find anything to help me get past this.
What I'm trying to achieve is:
return the unique (DISTINCT) values of WNR where tdate >= '2020-01-13 00:00:01.757000'
WNR tdate                         T1 T2 T3
2   '2020-01-06 00:05:23.229000'  8  18 15
2   '2020-01-06 00:05:23.725000'  11 4  7
2   '2020-01-06 00:05:31.578000'  19 12 6
3   '2020-01-13 00:00:01.655000'  9  9  3
3   '2020-01-13 00:00:01.757000'  5  11 16
3   '2020-01-13 00:00:05.778000'  16 17 16
4   '2020-01-20 00:00:11.925000'  18 13 4
4   '2020-01-20 00:00:12.177000'  18 3  15
4   '2020-01-20 00:00:12.694000'  7  12 7
5   '2020-01-27 00:00:04.860000'  19 3  14
5   '2020-01-27 00:00:05.056000'  14 18 8
5   '2020-01-27 00:00:05.107000'  18 7  14
The expected result is 3, 4, 5.
Thank you!
To select distinct values in PostgreSQL you can use the DISTINCT clause.
From the PostgreSQL documentation: "SELECT DISTINCT eliminates duplicate rows from the result. SELECT DISTINCT ON eliminates rows that match on all the specified expressions. SELECT ALL (the default) will return all candidate rows, including duplicates. (See DISTINCT Clause below.)"
SELECT DISTINCT WNR
FROM table_name
WHERE tdate >='2020-01-13 00:00:01.757000';
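As an aside, the quoted documentation also mentions SELECT DISTINCT ON. If you wanted one full row per WNR rather than just the distinct values, that form would look like this (a sketch using the table and column names from the question):
SELECT DISTINCT ON (WNR) WNR, tdate, T1, T2, T3
FROM table_name
WHERE tdate >= '2020-01-13 00:00:01.757000'
ORDER BY WNR, tdate;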
Example table:
table:([]col1:20 40 30 0w;col2:4?4;col3: 100 200 0w 300)
My solution:
{.[table;(where 0w=table[x];x);:;0n]}'[exec c from meta table where t="f"]
There is a way I am not seeing, I'm sure. This just returns a list of tables, one for each change, which I don't want. I just want the original table returned with the nulls replaced.
Thanks in advance!
It would be good to flesh out your question a bit more. Are you always expecting it to be float columns? Will the table have many columns? Will there be string/sym columns mixed in that might complicate things?
If your table has a small number of columns you could just do an update:
q)show t
col1 col2 col3
--------------
20   1    100
40   2    200
30   2    0w
0w   1    300
q)inftonull:{x[where x=0w]:0n;x}
q)update inftonull col1, inftonull col3 from t
col1 col2 col3
--------------
20   1    100
40   2    200
30   2
     1    300
If you think the column names might change, or you have a very large number of columns, you could try a functional update (where you can pass the column names as parameters):
q){![t;();0b;x!inftonull,/:x,:()]}`col1`col3
col1 col2 col3
--------------
20   1    100
40   2    200
30   2
     1    300
If your table is comprised of only numeric data, something like
q)flip{x[where x=.Q.t[type x]$0w]:x 0N;x}each flip t
col1 col2 col3
--------------
20   1    100
40   2    200
30   2
     1    300
might work; it tries to account for the fact that the numeric data has different types.
If your data is going to contain string/sym columns, the last example won't work.
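If you do want the affected columns picked out automatically, a sketch that combines the functional update above with the meta lookup from the question (assuming, as in the question, that the 0w values live only in float, type "f", columns):
q){![x;();0b;c!inftonull,/:c:exec c from meta x where t="f"]}t
col1 col2 col3
--------------
20   1    100
40   2    200
30   2
     1    300
Here exec c from meta x where t="f" returns the float column names, and the same inftonull amend is applied to each of them.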
Hi, I have a scenario where I need to add up all previous values (a running total).
The input is a single column of a table:
Col
3
5
4
6
9
7
8
And I need the output in this manner:
Col Col2
3   3
5   8
4   12
6   18
9   27
7   34
8   42
Kindly reply asap
Regards,
Neeraj
As long as you have a field to order by, you can use SUM ... OVER to compute the running sum:
SELECT Col, SUM(Col) OVER (ORDER BY id) Col2
FROM Table1
ORDER BY id;
An SQLfiddle to test with.
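If you'd rather test locally, here is a self-contained PostgreSQL version; the id column is an assumption, added only to make the row order explicit:
CREATE TABLE Table1 (id serial PRIMARY KEY, Col int);
INSERT INTO Table1 (Col) VALUES (3), (5), (4), (6), (9), (7), (8);
SELECT Col, SUM(Col) OVER (ORDER BY id) AS Col2
FROM Table1
ORDER BY id; -- running sums: 3, 8, 12, 18, 27, 34, 42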
I would like a query that shows a sum of values, with a default value for missing data. For example, assume I have a table as follows:
type_lookup:
id name
-----------
1 self
2 manager
3 peer
And a table as follows:
data:
id type_lookup_id value
--------------------------
1  1              1
2  1              4
3  2              9
4  2              1
5  2              9
6  1              5
7  2              6
8  1              2
9  1              1
After running a query I would like a result set as follows:
type_lookup_id value
----------------------
1              13
2              25
3              0
I would like all rows in type_lookup table to be included in the result set - even if they don't appear in the data table.
It's a bit hard to read your data layout, but something like the following should do the trick:
SELECT tl.id AS type_lookup_id, tl.name, coalesce(sum(da.value), 0) AS how_much
from type_lookup tl
left outer join data da
on da.type_lookup_id = tl.id
group by tl.id, tl.name
order by tl.id;
The left outer join keeps the type_lookup rows that have no match in data, and coalesce turns the NULL sum for those rows into the 0 you want.
[EDIT] Subsequently edited by changing count() to sum().