Select records in Postgres database without any null values - postgresql

Instead of stating each column name individually, is there a more efficient way to select all rows which do not have any nulls from a table in a Postgres database?
For example, if there are 20 columns in a table, how can I avoid typing out each of those columns individually?

Just check the whole row:
select *
from my_table
where my_table is not null
my_table is not null is true only if every column in that row is not null.
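The row-wise IS NOT NULL test above is a Postgres feature; a portable fallback is to build the per-column predicate from the table's catalog. Here is a minimal sketch using SQLite via Python's sqlite3 (in Postgres you would read information_schema.columns instead of PRAGMA table_info); the table and column names are made up for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE my_table (a INTEGER, b TEXT, c REAL)")
conn.executemany("INSERT INTO my_table VALUES (?, ?, ?)",
                 [(1, "x", 1.5), (2, None, 2.5), (None, "y", 3.5), (3, "z", 4.5)])

# Read the column names from the table metadata, then build
# "a IS NOT NULL AND b IS NOT NULL AND ..." without typing each name.
cols = [row[1] for row in conn.execute("PRAGMA table_info(my_table)")]
predicate = " AND ".join(f"{c} IS NOT NULL" for c in cols)
rows = conn.execute(f"SELECT * FROM my_table WHERE {predicate}").fetchall()
print(rows)  # only the fully non-null rows survive
```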

Related

DB2 INSERT INTO SELECT statement to copy rows into the same table not allowing multiple rows

I've been looking for an answer to this question for a few days and can't find anything referencing this specific issue.
First of all, should it work if I use an INSERT INTO SELECT statement to copy rows of a table back into the same table, but with new IDs and one of the columns modified?
Example:
INSERT INTO TABLE_A (column1, column2, column3) SELECT column1, 'value to change', column3 from TABLE_A where column2 = 'original value';
When I try this on a DB2 database, I'm getting the following error:
INVALID MULTIPLE-ROW INSERT. SQLCODE=-533, SQLSTATE=21501, DRIVER=4.18.60
If I run the same statement but I put a specific ID to return in the select statement, ensuring only 1 row is returned, then the statement works. But that goes against what I'm trying to do which is copy multiple rows from the same table into itself, while updating a specific column to a new value.
Thanks everyone!
It works fine for me without error on Db2 11.1.4.0
CREATE TABLE TABLE_A( column1 int , column2 varchar(16), column3 int)
INSERT INTO TABLE_A values (1,'original value',3)
INSERT INTO TABLE_A (column1, column2, column3)
SELECT column1, 'value to change', column3 from TABLE_A where column2 = 'original value'
SELECT * FROM TABLE_A
returns
COLUMN1|COLUMN2 |COLUMN3
-------|---------------|-------
1|original value | 3
1|value to change| 3
maybe there is something you are not telling us....
You don't mention your platform and version, but the docs seem pretty clear...
IBM LUW 11.5
A multiple-row INSERT into a self-referencing table is invalid.
First Google result:
An INSERT operation with a subselect attempted to insert multiple rows
into a self-referencing table. The subselect of the INSERT operation
should return no more than one row of data. System action: The INSERT
statement cannot be executed. The contents of the object table are
unchanged. Programmer response: Examine the search condition of the
subselect to make sure that no more than one row of data is selected.
EDIT You've apparently got a self-referencing constraint on the table. Ex: EMPLOYEES table with a MANAGER column defined as a FK self-referencing back to the EMPLOYEES table.
Db2 simply doesn't support what you are trying to do.
You need to use a temporary table to hold the modified rows.
Optionally, assuming that your table has a primary key, try using the MERGE statement instead.
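The stage-then-copy workaround above can be sketched like this, using SQLite via Python's sqlite3 as a stand-in (SQLite has no self-referencing-insert restriction, so this only illustrates the temporary-table pattern, not the DB2 error itself):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table_a (column1 INTEGER, column2 TEXT, column3 INTEGER)")
conn.execute("INSERT INTO table_a VALUES (1, 'original value', 3)")

# Step 1: stage the modified rows in a temporary table
# (the DGTT, in DB2 terms).
conn.execute("""CREATE TEMP TABLE staging AS
                SELECT column1, 'value to change' AS column2, column3
                FROM table_a WHERE column2 = 'original value'""")

# Step 2: copy the staged rows back into the source table.
conn.execute("INSERT INTO table_a SELECT * FROM staging")

result = conn.execute("SELECT * FROM table_a ORDER BY column2").fetchall()
print(result)
```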

Postgres: insert value from another table as part of multi-row insert?

I am working in Postgres 9.6 and would like to insert multiple rows in a single query, using an INSERT INTO query.
I would also like, as one of the values inserted, to select a value from another table.
This is what I've tried:
insert into store_properties (property, store_id)
values
('ice cream', select id from store where postcode='SW1A 1AA'),
('petrol', select id from store where postcode='EC1N 2RN')
;
But I get a syntax error at the first select. What am I doing wrong?
Note that the value is determined per row, i.e. I'm not straightforwardly copying over values from another table.
demo:db<>fiddle
insert into store_properties (property, store_id)
values
('ice cream', (select id from store where postcode='SW1A 1AA')),
('petrol', (select id from store where property='EC1N 2RN'))
There were some missing parentheses: each row of values has to be surrounded by parentheses, and the scalar SELECT statements as well.
I don't know your table structure but maybe there is another error: The first data set is filtered by a postcode column, the second one by a property column...
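The same scalar-subquery-inside-VALUES pattern works in standard SQL, so here is a runnable sketch using SQLite via Python's sqlite3 (table contents are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE store (id INTEGER, postcode TEXT)")
conn.executemany("INSERT INTO store VALUES (?, ?)",
                 [(1, "SW1A 1AA"), (2, "EC1N 2RN")])
conn.execute("CREATE TABLE store_properties (property TEXT, store_id INTEGER)")

# Each scalar subquery gets its own parentheses inside the VALUES row.
conn.execute("""INSERT INTO store_properties (property, store_id) VALUES
    ('ice cream', (SELECT id FROM store WHERE postcode = 'SW1A 1AA')),
    ('petrol',    (SELECT id FROM store WHERE postcode = 'EC1N 2RN'))""")

result = conn.execute("SELECT * FROM store_properties").fetchall()
print(result)
```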

Redshift move data from one table to another table

insert into table1 select * from table2
table1 has one additional column compared to table2.
How can I move the data of the other columns from table2 to table1 without listing all the column names individually in the INSERT query, for Redshift?
Any ideas?
If you really want to do this, you have to put all the extra columns of table1 at the end of its column list; then you'll be able to supply nulls or values after SELECT *, like this:
insert into table1
select *, null, null, null
from table2
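A minimal runnable sketch of the SELECT-star-plus-padding trick, using SQLite via Python's sqlite3 (the schemas are invented; in Redshift the principle is the same as long as the extra column comes last):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table2 (a INTEGER, b TEXT)")
conn.execute("INSERT INTO table2 VALUES (1, 'x')")

# table1 has one extra column, placed at the END so that
# "SELECT *" from table2 lines up with the leading columns.
conn.execute("CREATE TABLE table1 (a INTEGER, b TEXT, extra TEXT)")
conn.execute("INSERT INTO table1 SELECT *, NULL FROM table2")

result = conn.execute("SELECT * FROM table1").fetchall()
print(result)  # the extra column is filled with NULL
```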

DB2 - REPLACE INTO SELECT from table

Is there a way in db2 where I can replace the entire table with just selected rows from the same table ?
Something like REPLACE into tableName select * from tableName where col1='a';
(I can export the selected rows, delete the entire table and load/import again, but I want to avoid these steps and use a single query).
Original table
col1 col2
a 0 <-- replace all rows with just those where col1 = 'a'
a 1 <-- col1='a'
b 2
c 3
Desired resultant table
col1 col2
a 0
a 1
Any help appreciated !
Thanks.
This is a duplicate of my answer to your duplicate question:
You can't do this in a single step. The locking required to truncate the table precludes you querying the table at the same time.
The best option you would have is to declare a global temporary table (DGTT) and insert the rows you want into it, truncate the source table, and then insert the rows from the DGTT back into the source table. Something like:
declare global temporary table t1
as (select * from schema.tableName where ...)
with no data
on commit preserve rows
not logged;
insert into session.t1 select * from schema.tableName;
truncate table schema.tableName immediate;
insert into schema.tableName select * from session.t1;
I know of no way to do what you're asking in one step...
You'd have to select out to a temporary table then copy back.
But I don't understand why you'd need to do this in the first place. Let's assume there was a REPLACE TABLE command...
REPLACE TABLE mytbl WITH (
SELECT * FROM mytbl
WHERE col1 = 'a' AND <...>
)
Why not simply delete the inverse set of rows...
DELETE FROM mytbl
WHERE NOT (col1 = 'a' AND <...>)
Note the comparisons done in the WHERE clause are the exact same. You just wrap them in a NOT ( ) to delete the ones you don't want to keep.
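The delete-the-inverse-set approach is easy to check against the sample table from the question; here is a sketch using SQLite via Python's sqlite3:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mytbl (col1 TEXT, col2 INTEGER)")
conn.executemany("INSERT INTO mytbl VALUES (?, ?)",
                 [("a", 0), ("a", 1), ("b", 2), ("c", 3)])

# Delete the inverse set: every row that does NOT match
# the condition for the rows we want to keep.
conn.execute("DELETE FROM mytbl WHERE NOT (col1 = 'a')")

result = conn.execute("SELECT * FROM mytbl").fetchall()
print(result)  # only the col1 = 'a' rows remain
```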

How to update individual column cell, row wise in sql server 2008?

I have a table variable #temp with empty columns test001 through test0048.
I want to update each column individually, for each row, with some voltage values.
Output should be like the below table:
test001 test002 test003
101.6 NULL 99.25
NULL 102.5 89.45
NULL 68.45 103.0
I can do it using a WHILE loop, updating the cell for each column individually, but the loop takes more than 3 minutes to process thousands of records.
Is there any alternative way to update the columns row-wise?
Thanks in advance
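One common set-based alternative to a row-by-row WHILE loop is a single UPDATE whose CASE expressions fill several columns for all rows at once. This is only a sketch of that idea, using SQLite via Python's sqlite3 rather than SQL Server; the readings table, id key, and the hard-coded values (taken from the desired output above) are illustrative assumptions:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (id INTEGER, test001 REAL, test002 REAL, test003 REAL)")
conn.executemany("INSERT INTO readings (id) VALUES (?)", [(1,), (2,), (3,)])

# One set-based UPDATE instead of looping row by row:
# each CASE maps a row key to that column's value (NULL otherwise).
conn.execute("""UPDATE readings SET
    test001 = CASE id WHEN 1 THEN 101.6 END,
    test002 = CASE id WHEN 2 THEN 102.5 WHEN 3 THEN 68.45 END,
    test003 = CASE id WHEN 1 THEN 99.25 WHEN 2 THEN 89.45 WHEN 3 THEN 103.0 END""")

result = conn.execute(
    "SELECT test001, test002, test003 FROM readings ORDER BY id").fetchall()
print(result)
```

In practice the values would come from a joined source table (UPDATE ... FROM in SQL Server) rather than literals, but the single-statement shape is what removes the per-row loop.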