Query result differs between local and remote databases - db2

The table below is created in both the local and remote databases.
CREATE TABLE EMPLOYEE1 ( EMP_ID INTEGER, EMP_NAME VARCHAR(10), EMP_DEPT VARCHAR(10) );
The following rows are inserted into the table in both databases.
INSERT INTO EMPLOYEE1 (EMP_ID, EMP_NAME,EMP_DEPT)
VALUES (1,'A','IT'), (2,'B','IT'), (3,'C','SALES'), (4,'D','SALES'), (5,'E','ACCOUNTS'), (6,'F','ACCOUNTS'), (7,'G','HR'), (8,'H','HR');
COMMIT;
If I run the query below in the local database on my system, the result is correct: it returns all the rows in the table, exactly as the query should. But if I run the same query in the remote database, only 4 rows are returned, which is a wrong result.
SELECT * FROM EMPLOYEE1 WHERE (EMP_DEPT NOT IN ('IT','SALES') OR EMP_DEPT IN ('IT','SALES'));
Can anyone suggest why the query behavior changes?

As written, your query selects all the records, so you can simply use the following:
SELECT * FROM EMPLOYEE1
What is the purpose of this condition?
WHERE (EMP_DEPT NOT IN ('IT','SALES') OR EMP_DEPT IN ('IT','SALES'))
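One possible explanation for the missing rows (an assumption about the remote data, since it isn't shown in the question): if EMP_DEPT is NULL in some remote rows, both EMP_DEPT IN (...) and EMP_DEPT NOT IN (...) evaluate to UNKNOWN, so the OR is UNKNOWN and those rows are filtered out. A quick way to see this:

-- Hypothetical row, not part of the original data set
INSERT INTO EMPLOYEE1 (EMP_ID, EMP_NAME, EMP_DEPT) VALUES (9, 'I', NULL);
-- EMP_ID 9 does not appear in the result: NULL IN (...) and
-- NULL NOT IN (...) are both UNKNOWN, so the WHERE clause rejects the row.
SELECT * FROM EMPLOYEE1
WHERE (EMP_DEPT NOT IN ('IT','SALES') OR EMP_DEPT IN ('IT','SALES'));

If the remote table really contains the same eight non-NULL rows, compare the data in both databases before suspecting the query itself.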

Related

Redshift insert a date value into a table

insert into table1 (ID, date)
select ID, sysdate
from table2;
Assume I insert a record into table2 with the values ID: 1, date: 2023-1-1.
The expected result is that the ID of table1 is populated based on the ID from table2, and the date of table1 is populated based on sysdate at the time of the insert.
select *
from table1;
the expected result after running the insert statement will be:

ID | date
---+----------
 1 | 2023-1-6

but what I get is:

ID | date
---+----------
 1 | 2023-1-1
I see a few possibilities based on the information given:
1. You say "the expected result is update the ID of table1 based on the ID from table2", and this raises the question: did ID = 1 exist in table1 BEFORE you ran the INSERT statement? If so, are you expecting that the INSERT will update the value for ID #1? Redshift doesn't enforce or check uniqueness of primary keys, so in this case you would get 2 rows in table1. Is this what is happening?
2. SYSDATE on Redshift provides the start timestamp of the current transaction, NOT the current statement (see the sketch below). Have you had the current transaction open since the 1st?
3. You didn't COMMIT the results (or the statement failed) and are checking from a different session. It could also be that the second session's transaction started before the COMMIT completed. Working with MVCC across multiple sessions can trip anyone up.
There are likely other possible explanations. If you could provide DDL, sample data, and a simple test case so that others can recreate what you are seeing, it would greatly narrow down the possibilities.
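To see the SYSDATE behavior from possibility 2 directly, here is a minimal sketch to run inside one Redshift session (GETDATE() is the statement-level counterpart):

BEGIN;
SELECT SYSDATE, GETDATE();  -- both show the current time at transaction start
-- ... run other statements, let some time pass ...
SELECT SYSDATE, GETDATE();  -- SYSDATE is unchanged (transaction start),
                            -- GETDATE() has moved forward
COMMIT;

If the dates you are seeing match the transaction start rather than the statement time, switching SYSDATE to GETDATE() in the INSERT would be the likely fix.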

How to use the same common table expression in two consecutive psql statements?

I'm trying to perform a pretty basic operation with a few steps:
1. SELECT data from table1
2. Use the id column from the selection to remove matching rows from table2
3. Insert the selection from step 1 into table2
I would imagine that this would work:
begin;
with temp as (
select id
from table1
)
delete from table2
where id in (select id from temp);
insert into table2 (id)
select id from temp;
commit;
But I'm getting an error saying that temp is not defined during my insert step.
The only other post I found about this is this one, but it didn't really answer my question.
Thoughts?
From the Postgres documentation:
WITH provides a way to write auxiliary statements for use in a larger query. These statements, which are often referred to as Common Table Expressions or CTEs, can be thought of as defining temporary tables that exist just for one query.
If you need a temp table for more than one query, you can do this instead:
begin;
create temp table temp_table as (
select id
from table1
);
delete from table2
where id in (select id from temp_table);
insert into table2 (id)
select id from temp_table;
commit;
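If you'd rather keep it to a single statement: since PostgreSQL 9.1, a WITH clause may also contain data-modifying statements, and one statement can reference the same CTE from both a DELETE and the final INSERT. A sketch under that assumption:

with ids as (
    select id
    from table1
),
deleted as (
    -- remove the matching rows from table2
    delete from table2
    where id in (select id from ids)
)
-- re-insert the ids selected from table1
insert into table2 (id)
select id from ids;

Note the documented caveat that WITH sub-statements all see the same snapshot and cannot see one another's effects mid-statement; if table2.id carries a unique constraint, the temp-table approach above is the safer choice.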

How to create INSERT logs from SELECTs?

As school work, we're supposed to create a table that logs all operations done by users on another table. To be more clear, say I have table1 and logtable: table1 can contain any info (names, ids, job, etc.); logtable contains info on who did what, and when, on table1.

Using a function and a trigger I managed to get the INSERT, DELETE and UPDATE operations logged in logtable, but we're also supposed to keep a log of SELECTs. To be more specific: if you do a SELECT in a view, this is supposed to be logged into logtable via an INSERT; essentially logtable is supposed to get a new row with information telling that somebody did a SELECT.

My problem is that I can't figure out any way to accomplish this, as SELECTs can't make use of triggers and in turn can't make use of functions, and rules don't allow for two different operations to take place. The only thing that came close was using query logs, but as the database is the school's and not mine I can't make any use of them.
Here is a rough example of what I'm working with (in reality tstamp has hours, minutes and such):

id | operation | hid | tablename | who  | tstamp     | val_new   | val_old
 x | INSERT    |  x  | table1    | name | YYYY-MM-DD | newValues | previousValues

That works as intended, but what I also need to get to work is this (note: whether val_new and val_old come out as empty or not in this case is not a concern):

id | operation | hid | tablename | who  | tstamp     | val_new | val_old
 x | SELECT    |  x  | table1    | name | YYYY-MM-DD | NULL    | previousValues
Any and all help is appreciated.
Here is an example:
CREATE TABLE public.test (id integer PRIMARY KEY, value integer);
INSERT INTO test VALUES (1, 42), (2, 13);
CREATE TABLE test_log (id serial PRIMARY KEY, dbuser varchar, datetime timestamp);

-- get_test() inserts username / timestamp into the log, then returns all rows
-- of test
CREATE OR REPLACE FUNCTION get_test() RETURNS SETOF test AS $$
    INSERT INTO test_log (dbuser, datetime) VALUES (current_user, now());
    SELECT * FROM test;
$$ LANGUAGE sql;
-- now a view returns the full row set of test by instead calling our function
CREATE VIEW test_v AS SELECT * FROM get_test();
SELECT * FROM test_v;
id | value
----+-------
1 | 42
2 | 13
(2 rows)
SELECT * FROM test_log;
id | dbuser | datetime
----+----------+----------------------------
1 | postgres | 2020-11-30 12:42:00.188341
(1 row)
If your table has many rows and/or the selects are complex, you won't want to use this view, for performance reasons: the function runs to completion and materializes its full result on every call, so conditions on the view can't be pushed down into the underlying SELECT.
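One more caveat worth handling (a hedged sketch, assuming you control the grants on these objects): anyone with direct SELECT rights on test can bypass the log entirely, so you would route all reads through the view:

-- Run get_test() with its owner's privileges so callers need no direct
-- rights on test or test_log.
ALTER FUNCTION get_test() SECURITY DEFINER;
-- Block direct reads on the base table; expose only the logging view.
REVOKE SELECT ON test FROM PUBLIC;
GRANT SELECT ON test_v TO PUBLIC;

Note that inside a SECURITY DEFINER function, current_user reports the function owner, so the logging INSERT should record session_user instead if you want the real caller.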

Select columns with null values postgresql

I'm working on a PostgreSQL database with 22 tables. I need a query that returns the columns containing null values, maybe a static SQL statement that I can run against each table.
I would be pleased to get some help.
Best.
Assuming that you run VACUUM ANALYZE periodically, pg_stats.null_frac can help you get that:
--Get columns "filled" entirely with null values
SELECT
schemaname,
tablename,
attname,
null_frac
FROM
pg_stats
WHERE
null_frac = 1.0
AND schemaname = 'yourschema'
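Keep in mind that pg_stats is built from sampled statistics, so null_frac = 1.0 is an estimate. For an exact check of a single column (the table and column names below are placeholders), a direct count works:

-- Exact count of NULLs in one column; yourtable/yourcolumn are placeholders
SELECT count(*) AS null_rows
FROM yourtable
WHERE yourcolumn IS NULL;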

Is select * in t-sql deterministic?

Specifically I need to know if the query
select * from [some_table]
will always return the columns in the same order.
I've seen no indication that it is non-deterministic, but I cannot assume this is true due to the specifications of my application.
Can anyone point me at documentation one way or the other?
I've had no luck with my searches.
Thanks in advance.
SELECT * FROM [some_table]
always returns the same column order in the same DB.
N.B.
Assume you have two DBs: the first named DBA, the second named DBB. Both contain a table TRIAL.
In DBA, the TRIAL table has these fields in this order: id, name, surname
In DBB, the TRIAL table has these fields in this order: id, surname, name
When you execute
SELECT * FROM DBA..TRIAL
you'll get id, name, surname.
The same query on DBB will return:
id, surname, name
When using SELECT *, the columns are returned in (a) the order the tables appear in the FROM clause and (b) the order the columns appear within each table in the database.
From MSDN: "The columns are returned by table or view, as specified in the FROM clause, and in the order in which they exist in the table or view."
http://msdn.microsoft.com/en-us/library/ms176104.aspx
It is deterministic as long as the schema of the database is not modified.
Here is an example where SELECT * changes the order of the fields without changing the actual structure of the table:

CREATE TABLE AAA
(
    field1 varchar(10),
    field2 varchar(10),
    field3 varchar(10)
);

SELECT * FROM AAA;  --> field1, field2, field3

Now you do:

ALTER TABLE AAA DROP COLUMN field2;
ALTER TABLE AAA ADD field2 varchar(10);

SELECT * FROM AAA;  --> field1, field3, field2
Basically, I would not count on the order of the fields and would definitely specify them in the select clause, as below.
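For instance, an explicit column list pins the order in the query itself, regardless of later schema changes:

-- Column order is fixed by the query, not by the table's current metadata
SELECT field1, field2, field3 FROM AAA;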